How do they (oklch & oklab) compare for different uses?
OKLCH is a polar coordinate space. Hue is an angle in this space. So to interpolate hue from one angle to another, to get from one side of a circle to the other, you go round the edge. This leads to extreme examples like the one shown:
linear-gradient(in oklch, #f0f, #0f0)
You can also go round the circle the other way, which will take you via blue–aqua instead of via red–yellow: linear-gradient(in oklch longer hue, #f0f, #0f0)
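To make the "round the edge" behaviour concrete, here's a small Python sketch of shorter vs longer hue arcs, following the same rules CSS uses for hue interpolation (the hue values are the approximate OKLCH hues of #f0f and #0f0, ~328° and ~142°; edge cases like equal hues aside):

```python
def interp_hue(h1, h2, t, arc="shorter"):
    """Interpolate between two hue angles in degrees.

    'shorter' takes the arc of at most 180 degrees (the CSS default);
    'longer' forces the trip the other way round the circle.
    """
    d = (h2 - h1) % 360.0          # forward (clockwise) distance, 0..360
    if arc == "shorter" and d > 180.0:
        d -= 360.0                 # the other direction is shorter
    elif arc == "longer" and d < 180.0:
        d -= 360.0                 # force the long way round
    return (h1 + t * d) % 360.0

# Magenta (#f0f, ~328deg) to lime (#0f0, ~142deg):
via_red_yellow = interp_hue(328, 142, 0.5)            # shorter arc, through 0deg (red)
via_blue_aqua  = interp_hue(328, 142, 0.5, "longer")  # longer arc, through ~235deg
```

The shorter arc crosses 0°, which is why the default gradient passes through red and yellow; the longer arc passes through the blues and aquas instead.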
The gradient shown (in either case) is a good example of a way that perceptual colour spaces are really bad to work in: practically the entire way round the edge of the circle, it’s outside sRGB, in fact way outside the colours humans can perceive. Perceptual colour spaces are really bad at handling the edges of gamuts, where slightly perturbing the values takes you out of gamut. Accordingly, there are algorithms defined (yes, plural: not every application has agreed on the technique to use) to drag the colour back in-gamut, but they sacrifice the perceptual uniformity. The red in that gradient is way darker than the rest of it.
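For illustration, here is one simple gamut-mapping strategy, sketched in Python (this is my own assumption of a reasonable approach, not necessarily what any given browser implements): hold L and H fixed and binary-search the chroma down until the colour fits in sRGB, using Björn Ottosson's published Oklab-to-linear-sRGB matrices:

```python
import math

def oklch_to_linear_srgb(L, C, H):
    """OKLCH -> linear sRGB, via Oklab, using Ottosson's published matrices."""
    h = math.radians(H)
    a, b = C * math.cos(h), C * math.sin(h)
    l_ = L + 0.3963377774 * a + 0.2158037573 * b
    m_ = L - 0.1055613458 * a - 0.0638541728 * b
    s_ = L - 0.0894841775 * a - 1.2914855480 * b
    l, m, s = l_ ** 3, m_ ** 3, s_ ** 3
    return (
        +4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s,
        -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
        -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s,
    )

def in_gamut(rgb, eps=1e-6):
    return all(-eps <= c <= 1 + eps for c in rgb)

def clamp_chroma(L, C, H, steps=32):
    """Reduce chroma (keeping L and H fixed) until the colour fits in sRGB."""
    if in_gamut(oklch_to_linear_srgb(L, C, H)):
        return C
    lo, hi = 0.0, C
    for _ in range(steps):
        mid = (lo + hi) / 2
        if in_gamut(oklch_to_linear_srgb(L, mid, H)):
            lo = mid            # still inside: try keeping more chroma
        else:
            hi = mid            # outside: reduce further
    return lo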
When you’re looking for better gradients, if you care about perceptual uniformity (which frequently you shouldn’t; perceptual colour spaces are being massively overapplied), you should probably default to interpolating in Oklab instead, which takes a straight line from one side of the circle to the other—yes, through grey, if necessary.
linear-gradient(in oklab, #f0f, #0f0)
And in this case, that gets you about as decent a magenta-to-lime gradient as you can hope for, not going via red and yellow, and not exhibiting the inappropriate darkening of sRGB interpolation (… though if I were hand-tuning such a gradient, I’d actually go a bit darker than Oklab does). During its beta period, Tailwind v4 tried shifting from sRGB to Oklch for gradient interpolation; by release, they’d decided Oklab was a safer default.
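You can see the straight-chord behaviour numerically. Using approximate Oklab coordinates for #f0f and #0f0 (values converted from their OKLCH forms, so treat them as illustrative), the midpoint lands almost exactly on grey:

```python
import math

def mix_oklab(c1, c2, t):
    """Linear interpolation in Oklab (L, a, b): a straight chord, no hue arc."""
    return tuple((1 - t) * x + t * y for x, y in zip(c1, c2))

# Approximate Oklab coordinates for the two endpoints:
magenta = (0.702, 0.275, -0.169)   # ~#f0f
lime    = (0.866, -0.234, 0.179)   # ~#0f0

mid = mix_oklab(magenta, lime, 0.5)
chroma = math.hypot(mid[1], mid[2])
# chroma comes out tiny (~0.02): the midpoint is essentially a light grey
```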
"OK" because "it does an ok job" according to its creator Björn Ottosson.
With that out of the way, I'd like to go on a tangent here: can anyone explain the modern trend of not including publishing dates in blog articles? It stood out to me here in particular because the opening sentence said that "OKLCH is a newer color model" and the "newer" part of that sentence will get dated quicker than you think. The main site does mention a date, but limits it to "August 2025" so this seems like a conscious choice and I just don't get it.
For this specific gradient, see https://oklch.com/#0.7017,0.3225,328.36,100 and https://oklch.com/#0.86644,0.294827,142.4953,100, and look at the Chroma panel, see how far out of our screen gamuts they are (even tick “Show Rec2020”, which adds a lot of chroma around blue–green and magenta–red), and try to imagine the colours between the lime and magenta (in either direction). The red direction is probably the easier to reason about: there’s just no such colour as a light, bright red. You can have bright or light, but not both. (Its 3D view can also be useful to visualise these things: you’re building a straight-line bridge between two peaks, and there’s a chasm in between.)
> In the example above, you can see that the OKLCH colors maintain consistent blueness across all of the shades, while in the HSL example, the lighter shades drift to purple and the darker ones muddy out towards grayish.
I see lots of automatic palette generator projects where the shades of each color are generated with OKLCH by only varying the lightness value on some chosen base color. The problem I find is that if you look at popular open source palettes, the way the hand-crafted hue and saturation values vary across the shades for different hues isn't that predictable (the curves of the hue/saturation values over the shades aren't straight lines or typical easing curves).
Hawking my own tool (using HSLuv with RGB for now), but you can load and compare the hue and saturation curves as they vary over shades of a color using example palettes from Tailwind 3, USWDS and IBM Carbon, plus tweak each shade to your liking:
https://www.inclusivecolors.com/?style_dictionary=eyJjb2xvci...
So I think OKLCH is a nice starting base for palettes and a quick way to generate a color you need in CSS, but I think designers will always need to tweak the hue and saturation of each shade so it looks just right as there's no single right answer you could encode into the color space.
Also check out oklch.com, I found it useful for building an intuition. Some stumbling blocks are that hues aren’t the same as HSL hues, and max chroma is different depending on hue and lightness. This isn’t a bug, but a reflection of human eyes and computer screens; the alternative, as in HSL, is a consistent max but inconsistent meaning.
Another very cool thing about CSS’s OKLCH is it’s a formula, so you can write things like oklch(from var(--accent) calc(l + .1) c h). Do note, though, that you’ll need either some color theory or fiddling to figure out your formulas, my programmer’s intuition told me lies like “a shadow is just a lightness change, not a hue change”.
Also, OKLCH gradients aren’t objectively best, they’re consistently colorful. When used with similar hues, not like the article’s example, they can look very nice, but it’s not realistic; if your goal is how light mixes, then you actually want XYZ. More: https://developer.mozilla.org/en-US/docs/Web/CSS/color_value....
Also, fun fact: the “ok” is actually just the word “ok”. The implication being that LCH was not OK, it had some bugs.
With RGB you order green salad you get green salad.
With OKLCH you order green salad you get beet soup.
And yes, both oklch gradients look pretty weird while the oklab gradient looks nice (if you can accept it going through grey).
I find APCA is a little stricter than WCAG for light themes, and APCA is much stricter than WCAG for dark themes, to the point where you really shouldn't use WCAG for dark themes. So most of the time APCA is giving you stricter contrast that easily passes WCAG as well.
I keep seeing mentions that APCA will let you finally use e.g. white on orange, or white on vibrant blue that pass APCA but fail on WCAG, but my feeling is there's not a lot of examples like this and most of these pairings only have okay contrast anyway, not great contrast, so it's not ideal to be stuck with WCAG's false negatives but not that big of a deal.
edit: Also, you mentioned the colors "beyond the ranges of human perception" but I don't think there is any such limitation here, the bottleneck is the hardware (computer monitors).
Apologies! (I can't delete the post though, feel free to down-vote into oblivion)
I only bring it up because I had a situation last week where the better APCA was giving results for both white-on-colour and #111-on-colour as suitable for headline copy under WCAG3, but #111-on-colour was 7.5:1 and white-on-colour was 2.5:1 under WCAG2, hence we could only use one of them legally.
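For reference, the WCAG2 ratio in question is easy to compute yourself. This is the standard relative-luminance formula from the WCAG 2 definition (sRGB transfer decode, luminance weights, then the (L1 + 0.05)/(L2 + 0.05) ratio):

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer curve for one channel in the 0..1 range."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hexcolor):
    """WCAG 2 relative luminance of a #rrggbb colour."""
    r, g, b = (int(hexcolor.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def wcag2_contrast(fg, bg):
    """WCAG 2 contrast ratio, from 1:1 up to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

The 0.05 flare term is why the formula behaves oddly near black, which is part of what the dark-theme complaints above are about.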
https://evilmartians.com/chronicles/oklch-in-css-why-quit-rg...
Along with their picker / converter here:
Discussed on Hacker News here:
https://news.ycombinator.com/item?id=43073819 (6 months ago, 30 comments)
Yeah I understand, would you agree this is fairly rare when using APCA though?
I've had the opposite where the brand guide was suggesting we use a light on dark combo that passed WCAG2, yet it failed APCA, and worst of all clearly had poor contrast just by looking at it. Yet, some people will still go with it because WCAG2 gave it the okay haha.
The first uses oklch(0.65 0.20 300), comfortably inside sRGB, not even at the boundary. The second uses oklch(0.65 0.28 300), which is well outside P3 and even Rec.2020.
The smallest fix would be to make the second one oklch(0.65 0.2399 300) to bring it inside P3 so the demo doesn’t get slightly warped if Rec.2020-capable (not really necessary, but preferable, I’d say), and the first #a95eff (oklch(0.6447 0.2297 301.67)) which is CSS’s fallback.
But purple is also pretty much the worst choice for such a demo—P3 adds the least to sRGB around there, so the difference will be smallest. A better choice is red or green.
So a better pair would be oklch(0.65 0.2977284 28) on the right (a bright red at the very edge of the P3 gamut, well outside sRGB) and #f00 on the left (the sRGB value CSS will map it to if out of gamut).
I keep seeing new tutorials on designing accessible palettes that still use HSL, where the WCAG2 contrast breaks and goes all over the place as you vary the hue and saturation. HSLuv makes life so much easier here and lets you focus on exploring colors that you know will pass, using a familiar looking color picker.
In such cases, I usually try to see if the `Last-Modified` header served with the HTML document over HTTP can be useful, but I conclude that often the same people who don't bother with dating their content -- you'd think they'd understand where the word _blog_ comes from, as in "[web]-log", where timestamps are paramount -- these same people don't know or care how HTTP works. Hint: `Last-Modified` is the last modification time of the _resource_, in this case the actual HTML document. Just because your "backend" re-rendered the content because you didn't bother to set up your server caching correctly doesn't mean you should pretend it's brand new content every day (which https://jakub.kr/components/oklch-colors does, unfortunately, so you won't know the timestamp from HTTP).
I am not sure what the status is.
> colors are much more accurate in terms of how humans perceive them and it makes working with them much easier
This is an aim which I am not sure is fully addressed.
Think about why you can't get oklch or any other method of color definition in HTML to produce the "gold color" as we humans think of it. The fundamental reason you can't get a truly convincing "gold" color using a single value in oklch, rgb, or any other color definition is that gold is a material, not a color.
What our brain perceives as "gold" is not a single, flat hue. It's a complex interplay of light, reflection, and texture.
Point is, even with sophisticated CSS involving linear gradients, it is still a challenge
See demo at https://jsfiddle.net/oyw9fjda/
---
<style>
  .flat-gold {
    width: 200px;
    height: 200px;
    /* Using Oklch for a perceptually uniform yellow */
    background: oklch(85% 0.11 85);
  }

  .gradient-gold {
    width: 200px;
    height: 200px;
    background: linear-gradient(
      135deg,
      oklch(60% 0.1 65),      /* Dark, desaturated shadow (brownish) */
      oklch(85% 0.11 85) 45%, /* Rich mid-tone gold */
      oklch(98% 0.1 90) 50%,  /* Sharp, bright highlight (almost white-yellow) */
      oklch(85% 0.11 85) 55%, /* Back to the mid-tone */
      oklch(70% 0.1 75)       /* Softer shadow on the other side */
    );
  }

  .conic-gold {
    width: 200px;
    height: 200px;
    border-radius: 50%; /* Often looks best on a circle */
    background: conic-gradient(
      from 90deg,
      #B38728, /* Darker Start */
      #FEE9A0, /* Bright Highlight */
      #D4AF37, /* Mid-tone */
      #FEE9A0, /* Another Highlight */
      #B38728  /* Darker End */
    );
  }

  .premium-gold {
    width: 200px;
    height: 200px;
    background-color: #B38728; /* Fallback color */
    background-image:
      /* Layer 1: Sharp highlight on top */
      linear-gradient(
        175deg,
        rgba(255, 253, 240, 0.6) 0%,
        rgba(255, 215, 0, 0.2) 40%,
        rgba(136, 96, 0, 0.5) 90%
      ),
      /* Layer 2: Base metallic gradient */
      linear-gradient(
        105deg,
        transparent 35%,
        #FEE9A0 48%,
        #D4AF37 52%,
        transparent 65%
      );
    border: 2px solid oklch(60% 0.1 65);
    box-shadow: inset 0 0 10px rgba(0, 0, 0, 0.4);
  }
</style>
<div class="flat-gold"></div>
<hr />
<div class="gradient-gold"></div>
<hr />
<div class="conic-gold"></div>
<hr />
<div class="premium-gold"></div>
There is a very clear shift towards green in the OKLCH lightness value change example, enormously more so than any purple vibe in the HSL example.
Clearly being able to select colours of the same perceptual intensity has value, but some of the claims here as to the benefits are exaggerated.
While it’s more useful for a perceptual color space, relative colors are supported for all CSS color spaces e.g.
background-color: rgb(from var(--base-color) calc(r - 76.5) g calc(b + 76.5));
Edit: I would imagine that the only way to define a perceptually uniform color space is by tons of user testing. This is how Munsell developed his color space… specifically presenting test subjects with pairs of identical and near-identical color swatches and asking if they could tell the difference.
In this way, pairwise comparison of similarity became the bedrock of color perception science.
Isn't it any continuous function that starts at a specified color and ends at another specified color?
How then does one say that any gradient is good or bad?
Isn't the problem you are highlighting guaranteed to exist for any colorspace that defines colors outside of human perception?
Moving the monochromatic BT.2020 colors from 630 nm, 532 nm and 467 nm could get a little increase in color space coverage, but at the expense of a lower efficiency in power consumption to brightness conversion. 467 nm is not a very pure blue, but the sensitivity of the eye drops very quickly in the blue region, so a better blue would require much more power. Similarly, though not so pronounced, for a different green.
Moreover, in the green region there is a gap, both for lasers and for LEDs, where the available devices have low efficiencies for converting electrical power to light, so changing the frequency of the primary green color would have to take that into account too.
In conclusion, I believe that the BT.2020 (actually BT.2100) color space is close to the best that can be done in displays of reasonable price and energy efficiency.
A true coverage of 100% of the BT.2100 color space can be realized only with laser projectors. Any display with LEDs or quantum dots will never have really monochromatic primary colors, though a coverage of significantly more than 90% of the BT.2100 color space is not too difficult. However, the advertised percentage of the color space may be misleading, because it varies depending on the kind of color space used for computations. A coverage percentage computed in OKlab would be more informative than a percentage computed in the XYZ color space.
> But as @c-blake has rightly pointed out, this doesn't take into account the ratio of visible background pixels to foreground pixels. For example the contrast required for a single fullstop character, ".", is going to be different from a capital, "B".
So APCA includes more guidance on font weight and font size for more contexts (e.g. headings, body text, shorter text, copyright notices), but it's still going to be an approximation for edge cases like displaying a single fullstop character. If this case is common, you'd want to increase the contrast value instead of going with the minimum passing value, so the contrast algorithms can still help you.
There's a tradeoff with having guidelines that are very accurate (e.g. a contrast algorithm that counts pixels) vs simpler to follow (e.g. recommended font weight and size). People already find WCAG2 hard to follow as it is.
This totally has uses, but the claim that "there is no hue or saturation drift" doesn't hold, given the hue has shifted so much.
The newer CIE color appearance models (CIECAM02, CIECAM16) seem to address the effect that color perception goes wild if you change background and illumination. Oklab seems to only make some fixed choice about how to include the chromatic part in lightness. I'm not so sure how this definition of lightness is any better than grayscale (from 1931).
I was referring to the new RGB LED. From what I have read so far seems to be doing far better than LED or quantum dots.
But I guess we will have to settle for BT.2100 then.
Unless you want to make displays for the bees and birds (and the tiny alleged minority of human tetrachromats) that seems rather pointless. People with three color receptors are your customers, so that's driving the market.
I rather predict that future displays will further improve on contrast, including very bright (brilliant) sparks. Imagine an iPhone glittering like gold or diamonds ...
[0] https://gist.github.com/dkaraush/65d19d61396f5f3cd8ba7d1b4b3...
I always understood those “muddy midpoints” as a failure to properly gamma correct the interpolation between two (s)RGB colors. Is that happening here, or is the mud coming from something else?
I thought yours was an honest question that warrants an answer (which thankfully Chris answered).
If that's a correct implementation of OKLCH, then it's not something I would ever touch. Something seems to be deeply wrong with however they're calculating hue.
HSL/HSV have issues with perceptual lightness. But not with hues. The hue is constant and doesn't need any correction depending on saturation or lightness.
https://oklch.com/#0.7684,0.1754,218.1,100
When you increase brightness past the limit of the hue band for the color, the rendered output on a display shifts toward cyan, due to the limited brightness range of a saturated blue in the Display P3 color space. When varying brightness along a gradient, OKLCH is saturation-invariant rather than hue-invariant; whether that effect is desirable is a matter of aesthetic preference, but after decades of hue-invariant, desaturated web color, it's certainly refreshing to have a choice about which compromised invariance to take.
https://news.ycombinator.com/item?id=44588388
Science recently invented a much-deeper blue LED than we have now, so I expect in a decade or two, whatever ends up succeeding Display P3 will be much more able to represent that gradient without cyan-shift, and all of the years of OKLCH gradients created before that time will end up showing a more accurate blue gradient with only the colorspace change. In the meantime? Do whatever’s aesthetically pleasing; there is no Right Answer in design :)
ps. One could argue that a post-OKLCH colorspace should not accept the binary decision of Hue or Saturation invariance, and instead should be Saturation-invariant to the display’s native limit and then transition smoothly to Hue-invariance at that threshold. I believe that’s a Difficult Problem in color profile specification terms, since it isn’t just pre-calculating the changeover threshold for all monitors (not to mention, do you change blue sooner or same as red, etc) but it’ll be a while longer before I’m versed enough in ICCv4 to explain that perception. It sure would make for an interesting experimental DisplayCAL target, though!
[EDIT] Ahh.. The W3C has already looked at this. https://www.w3.org/Graphics/Color/Workshop/slides/talk/lille...
>These examples are using the WCAG2 contrast algorithm which is well known
Only one of the 4 tables shown is the thing you say is the known-to-be-flawed WCAG2 one. Some counterexamples are listed for all 4 formulas, though, 2 of which use the CIE Lightness (which, sure, is probably different, but I believe the CIE L is what APCA is based upon - in spite of so..many..words on their doc pages they often just say "lightness").
------------------------
Another point of those 4 tables, perhaps more clear when looking at the python script, is whether "numerical ratio" vs abs(difference) is better. It seems to me that color space designers, like this OKLCH, are going after "perceptual linearity" which suggests abs(diff) is far more appropriate than a "ratio" which has "near zero" troubles (and zero & one are downright seductive numbers for perceptual lightness scales).
I certainly should learn more about it, but various "click through" APCA things I've seen seem to speak in ratio terms like "10 times the contrast" (though admittedly that only assumes some scale for contrast not that it's formulated as a ratio - it's just suggestive). So, I should probably look more into it before actually offering a critique, but it still has the feeling of "cross purposes" - using some color space axis designed for [0,1] linearity differences instead for ratios within that axis. When I tried using the WCAG2 one I was kind of stunned how sensitive everything was to what should have been a kind of "arbitrary adjustment" to handle near-zero.
I might wonder what designers of color spaces actually have to say about this ratio vs. difference issue if you know of any articles. You seem knowledgeable. The spaces seem literally designed for differences to me.
Usually you fix it by moving your point through a different colour space. Choice depends on your requirements and mediums you're working with (normally different types of light sources or screens).
I had to write a low-level colour interpolation library for a few interactive art projects, so I dipped a bit into this, but I'm no colour expert.
On some blogs I can only tell the timeframe of the content from the timestamps on the comments ... but many blogs like the OP's don't support comments. I'm not likely to revisit them. (The blurb on the OP's main page is ironic ... rather than obsessing over the smallest details I see obsession over esthetics to the detriment of functionality.)
I looked into it a bit more, and it turns out it's a result of OKLCH easily producing colors out of gamut, and then choosing to sacrifice hue accuracy for better saturation accuracy.
That's a fundamental design flaw if you ask me. Changing hue is completely unacceptable in my book.
https://bottosson.github.io/posts/oklab/#what-about-existing...
The border of the color space corresponding to monochromatic colors is convex, so any triangle inscribed in that curve border will leave parts of the color space that cannot be reproduced on a display.
The convexity of the border means that if you compute the 3 coefficients of RGB colors that match a desired color, there will always be some colors that need one negative coefficient, regardless of the choice for the RGB primary colors. On any display, it is impossible to realize a negative brightness of a color, so on a RGB display there will always be some colors whose hue and brightness can be reproduced, but their saturation cannot be reproduced.
The only way to make smaller the parts of the color space that cannot be reproduced is to take more points on the monochromatic border, replacing the triangle of reproducible colors with a polygon that fits more closely the curved border.
The entire color space perceived by humans could be reproduced on a display only with tunable lasers, not with fixed-frequency primary colors. One could use fixed-frequency primary colors only if they would stimulate directly the photoreceptors in the eye, to enable workarounds for the overlapping filter characteristics of the cones and for the signal processing done in the retina.
https://github.com/hazelgrove/hazel/blob/dev/src/web/www/sty...
Basically no difference at all.
It’s a “color” because it’s useful to describe such a thing. If you had a monitor entirely filled with 50% white you’d call it white. Only by comparing it to something brighter do you call it gray. Brown is the same thing. In a dark room, if you looked at a monitor filled with red and green pixels you’d call it orange. Only when you start adding in clues like whites and brighter colors would you call it brown.
Anyway, yes grey is a color. But it is not quite the same as other colors. Other colors occupy only parts of the visible electromagnetic spectrum. Whites are the whole thing.
There are actually several very good Technology Connections videos about this stuff. Color is very cool!
So, depending on what you're doing, you want different things. You may want to view your color space as an RGB cube, and go through gray. Or you may want to view your color space as something more like HLS or OKLCH, and not go through gray.
Which brings me to point at the crux of the matter: avoid gradients between two dissimilar colors in the first place.
Failure to do this conversion is what leads to the bad results when interpolating: going from red to green will still go through grey but it should go through a much lighter grey compared to what happens if you get the interpolation wrong.
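A minimal sketch of that conversion (decode the sRGB transfer curve, interpolate in linear light, re-encode): the naive gamma-space midpoint of red and green is a dim (0.5, 0.5, 0), while the linear-light midpoint comes out around (0.735, 0.735, 0), noticeably lighter:

```python
def to_linear(c):
    """sRGB-encoded channel (0..1) -> linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def to_srgb(c):
    """Linear light -> sRGB-encoded channel (0..1)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def mix_linear_light(rgb1, rgb2, t):
    """Interpolate in linear light, then re-encode to sRGB."""
    return tuple(to_srgb((1 - t) * to_linear(a) + t * to_linear(b))
                 for a, b in zip(rgb1, rgb2))

red, green = (1, 0, 0), (0, 1, 0)
naive = tuple((a + b) / 2 for a, b in zip(red, green))      # (0.5, 0.5, 0): too dark
gamma_correct = mix_linear_light(red, green, 0.5)           # ≈ (0.735, 0.735, 0)
```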
Also if I use the macOS app "Digital Color Meter" I get essentially the same green value for the rightmost OKLCH and HSL colors (226 and 227 for "display native values" and 228 and 227 for sRGB).
See slide 19: https://www.w3.org/Graphics/Color/Workshop/slides/talk/lille... -- if you ask CIELAB to make "pure blue" (RGB 0 0 100%) become grayscale, the intermediate colors become purple to the human eye. The entire point of a perceptual color space is that that doesn't happen. OKLCH fixes that.
BTW, credit to Björn Ottosson, who basically side-projected a color space into the web standards and more: https://bottosson.github.io/posts/oklab/ ... folks like him are why we sometimes have nice things!
One last thing:
> This way the browser uses OKLCH colors if they are supported, otherwise it falls back to sRGB.
This wording suggests it’s about gamuts, but it’s actually about syntax. It will use the oklch() if that syntax is supported, and then the OKLCH value may be within sRGB or may be in another gamut the screen is using or may need to be mapped back to the screen’s gamut. Whereas #rgb values are inherently limited to sRGB, the baseline.
...with varying definitions of "whole". D65 white is almost blue when compared to A white. It stops being either “all the visible spectrum" or "in equal proportions” pretty quickly once you look closer at it.
For a spot color (from a gel covering a light) the light diffuses further from the center of the projected light — two spot colors (with different gels) then, next to each other, would give a kind of gradient from one color to the next as you walk a line from the center of one light to the other.
I wonder what the closest analog to this is algorithmically?
I guess where I am going with this is: is there precedent in nature as to how gradients are supposed to work (and therefore an analog which we should try to model) or are we going strictly on how the human eye perceives color and what algorithm we think "looks" right?
It's normal and expected for saturation to change. And for brightness to get clipped. But not for hue to change.
That's the critique of OKLCH here, that changing hue is a bizarre and undesirable choice.
That is a gigantic difference. Those are totally different hues. Which are, of course, exactly the difference we're seeing.
The 2x2 table in that contrast experiments link I sent enumerates some differences along the edge cases { even with just |diff|s. }. Just empirically if you change that 0.05 to 0.02 or 0.10 things change "a lot" in terms of all the edge cases. You can try fiddling with running that Python script yourself and see.
Also, I believe the project of an actual "contrast measurement" - not merely threshold checking - is a worthy goal. I think it would be good to be able to say how bad, and for that the specific monotonic transformation absolutely matters, and again, I expect the color space designer people have opinions on this very worth listening to. I think they are targeting differences in the numbers being the most meaningful thing.
All that said, I did like your George Box quote. :-) I just don't think dismissing the problem is a great solution here. I'm not sure there is a great solution. But you & anyone are always free to find any problem uninteresting. I mean, you could also find all the color space distinctions of TFA similarly "no real difference".
I suppose that's different with light than some analog with pigments? (Two dabs of color set apart, a brush perhaps used to blend them as continuously as is possible.)
The OKLCH lighter shade veers off into bright cyan! I don't see HSL getting grey on the dark side.
There is absolutely no perceptual hue shift in the HSL/HSV models depending on saturation and lightness/value, or when they get translated into sRGB. That's the entire point of HSL/HSV, to isolate hue and hold it constant.
HSL/HSV are not perceptually uniform in terms of brightness when hue is changed, nor is their lightness axis perceptually linear. But hue is hue. Perceived hue does not change based on brightness or saturation.
https://theconversation.com/how-rainbow-colour-maps-can-dist...
https://www.poynter.org/archive/2013/why-rainbow-colors-aren...
(Since this reminded me of it, a random pro tip: For those using macOS Terminal.app, you can redefine the 16 ANSI colors using the full P3 colorspace, so long as you use the full GUI color picker rather than the sRGB-limited #rrggbb entry method. Access to improved saturation helps the eye distinguish different shades in the dim and bright color sets more effectively without having to alter their brightness. It won’t improve the limitations of ANSI color as a whole — the insistence on Luminosity = f(R,G,B) is baked into everyone’s assumptions quite deeply thanks to sRGB! — but it does at least mean you can have seven equidistant and non-desaturated, non-sRGB colors at two levels of brightness for syntax highlighting and other typical 16-color uses.)
Also, I do hope people begin to see the value of "designing toward intuition," I can't help but notice that efforts like these are exact opposite of what happened when (most of) the world forcibly converted to metric.
So first, thank you for the correction. It's fantastic to learn something new. And second, do you have any idea why the OKLCH example on the page is so atrociously bad? The way blue changes to cyan is even worse than the HSL/HSV difference of blue-purple. It's like the cure is worse than the disease. If they were so concerned with hue fidelity in the first place, I'm surprised they wound up producing an end result just as bad.
> you should probably default to interpolating in Oklab instead
The article says as much. Quoting:
> This can be a double edged sword. While some gradients might look smoother, you might also see colors that you've never defined. This is because hue in OKLCH is circular and gradients can take unexpected detours.
> To avoid this, many tools use OKLAB for gradients, which interpolates in a straight line and gives more consistent results.
Aren't these in direct conflict? If you can't resaturate it, that implies it's not desaturated.
> I think if you buy a tie-dye shirt or phone case and it comes out half grey, despite it being a valid color, most folks will be disappointed.
And if you buy a forest motif, people will be upset if it's pink. That's just doing a tie-dye wrong, not a rebuke of whether it's a color at all.
Kinda. There's a singularity in the math. The problem is that hue is defined as an angle and saturation is defined as distance from the center, but there's no consistent way to define a direction for the origin. Black and white have the same problem because they're also desaturated.
> And if you buy a forest motif, people will be upset if it's pink. That's just doing a tie-dye wrong, not a rebuke of whether it's a color at all.
I'm not arguing that it's not a color, just that it doesn't belong in all gradients!
I thought the same until I googled "blue and orange tie-dye." I'll be honest, more white and black than I expected!
> So yes there are many answers but because it's not a quintessential example of a gradient.
We may have to agree to disagree that tie-dye isn't a quintessential example of a gradient. Would you argue that rainbows aren't either?
A magenta to green gradient would then go through white rather than grey. A subtractive magenta-green gradient would go through black. Not sure what physical setup would produce the latter gradient. But the standard RGB (or OKLAB) gradient goes through grey rather than white or black. This type of gradient is physically created by dithering: Dithering a gradient from magenta to green, by just using these two base colors, would produce a perceptual grey in the middle. This type of color mixing is otherwise better known as alpha blending.
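The three mixing regimes can be shown with three one-liners (idealised, ignoring gamma encoding):

```python
magenta, green = (1, 0, 1), (0, 1, 0)

# Additive: shining both lights at full power; the sum clips to white.
additive = tuple(min(1, a + b) for a, b in zip(magenta, green))   # (1, 1, 1)

# Dithering / alpha blending: the power is split between the two, giving grey.
average = tuple((a + b) / 2 for a, b in zip(magenta, green))      # (0.5, 0.5, 0.5)

# Subtractive: filters in series; each filter passes only what it doesn't absorb.
subtractive = tuple(a * b for a, b in zip(magenta, green))        # (0, 0, 0)
```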
> Since different hues have a different perceived brightness, another way you can change the brightness of a color is by rotating its hue.
> To make a color lighter, rotate the hue towards the nearest bright hue — 60°, 180°, or 300°.
> To make a color darker, rotate the hue towards the nearest dark hue — 0°, 120°, or 240°.
> This can be really useful when trying to create a palette for a light color like yellow. By gradually rotating the hue towards more of an orange as you decrease the lightness, the darker shades will feel warm and rich instead of dull and brown
You could argue whether this is "perceptual uniformity" or something else, but the fact is that to create a realistically useful colour palette with a bunch of shades, you definitely cannot simply use HSL, keep the hue constant, adjust S/L. It's not that easy (and OKLCH doesn't make it that easy either).
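As a sketch of that advice (the numbers here are my own illustrative picks, not from the article), a yellow ramp that rotates toward orange as it darkens might be generated like this:

```python
def yellow_shades(n=5):
    """Darker shades of yellow that rotate toward orange as lightness drops.

    Hue ~100 is roughly yellow in OKLCH and ~70 is orange-ish; the endpoints,
    the chroma, and the linear ramps are all illustrative, not tuned values.
    """
    shades = []
    for i in range(n):
        t = i / (n - 1)
        L = 0.95 - 0.55 * t   # lightness from 0.95 down to 0.40
        H = 100 - 30 * t      # hue rotating from yellow toward orange
        shades.append(f"oklch({L:.2f} 0.12 {H:.0f})")
    return shades
```

A real palette would almost certainly want hand-tuned, non-linear curves for L, C and H, per the hand-crafted-palette point made earlier in the thread.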
Putting white or black in between adds another anchor point.
And looking more around examples of blue and orange tie dye, most aren't really gradients overall, they have big splotches of solid color with small gaps or overlaps in between, and at least half the time the gaps and overlaps don't even have a gradient inside them.
> We may have to agree to disagree that tie-dye isn't a quintessential example of a gradient. Would you argue that rainbows aren't either?
Hmm. How about this. I would say a rainbow is not a gradient between two colors, and the color space discussion is about a gradient between two colors. The exact border of "quintessential" is not something I really want to spend too much time on.
I had claude build me a comparison (not sure why 2x the same HSL one) https://i.imgur.com/uziQibR.png
Also super hard in RGB: https://jsfiddle.net/nhgvzm5p/2/ it's just a 2 color OKLCH gradient:
oklch(0 0.07 279) 66%,
oklch(0.98 0.09 276) 99%
- how to easily add "warmth". You can't just add red+green - no good tools for it (e.g. a nice gradient picker UI that lets me specify if I want to allow running out of bounds of the color space and clip a few colors at max saturation)
That is the biggest problem with these colour spaces: the edges are unclear, and overflowing them has bad effects.
if I sample 5 colors and throw them in as rgb?
the 3 color rgb one has very similar banding
this is _especially_ vital if you're not from the color science & color correction industry, because that means you have no prior experience with many of the problems & implications faced in the last 30 years of, say, film post-production, which has seen or solved some of these newly experienced "problems", which might've been caused by hardware choices, for example.
I recently implemented both, first I started with OKLab which turned out really well, the gradients you get from it are amazing and the usual color sets (analogous etc.) produce really pleasing sets.
However I quickly ran into the main problem with it, which is that fiddling with its Lightness, Chroma and Hue dials doesn't produce human-understandable results. For example, sometimes changing L or C induces a color shift, and for some given values changing L only gives midrange values that don't go up or down all the way.
I then implemented OKLCH on top of that, which was the way I assumed everyone was doing it. Just have it act as the controller for the human layer, then convert to OKLab for creating gradients etc. The article doesn't really go into it, but having OKLCH as the frontend controller fixes the LCH sliders such that they produce values that make sense to us humans, while still having the superior OKLab setup in the back.
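For anyone wondering what that layering costs: almost nothing, because OKLCH is just the polar form of Oklab. A sketch of the two conversions:

```python
import math

def oklab_to_oklch(L, a, b):
    """Oklab -> OKLCH is a Cartesian -> polar change of coordinates."""
    C = math.hypot(a, b)                       # chroma: distance from the grey axis
    H = math.degrees(math.atan2(b, a)) % 360   # hue: angle around the grey axis
    return L, C, H

def oklch_to_oklab(L, C, H):
    """OKLCH -> Oklab: back to Cartesian for straight-line interpolation."""
    h = math.radians(H)
    return L, C * math.cos(h), C * math.sin(h)
```

So the "OKLCH frontend, Oklab backend" setup described above is cheap: the sliders edit (L, C, H), and gradients interpolate the (L, a, b) form.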