218 points ahamez | 4 comments
crispyambulance No.42728529
Every time I see stuff like this it makes me think about optical design software.

There are applications (Zemax, for example) that are used to design optical systems (lens arrangements for cameras, etc.). These applications are eye-wateringly expensive -- priced similarly to top-class EDA software licenses.

With the abundance of GPUs and modern UIs, I wonder how much work would be involved for someone to make optical design software that blows away the old tools. It would be ray-tracing, but with interesting complications: accounting for polarization, diffraction, scattering, fluorescence, media effects beyond refraction such as birefringence, and stuff like the Kerr and Pockels effects.
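
For a sense of the core kernel involved, here is a minimal refraction step in plain NumPy (a toy sketch, not taken from any of these tools), using the standard vector form of Snell's law; everything listed above -- polarization, diffraction, birefringence -- would layer on top of something like this:

    import numpy as np

    def refract(d, n, eta):
        """Refract unit direction d at a surface with unit normal n.
        eta is the ratio n1/n2 of refractive indices across the interface."""
        cos_i = -np.dot(n, d)               # cosine of the incidence angle
        sin2_t = eta**2 * (1.0 - cos_i**2)  # Snell: sin(t) = eta * sin(i)
        if sin2_t > 1.0:
            return None                     # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # A ray hitting a glass surface (n = 1.5) from air at 45 degrees:
    d = np.array([np.sin(np.pi/4), -np.cos(np.pi/4), 0.0])  # heading down
    n = np.array([0.0, 1.0, 0.0])                           # normal points up
    print(refract(d, n, 1.0/1.5))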

replies(10): >>42728932 #>>42728962 #>>42729610 #>>42730664 #>>42730756 #>>42731720 #>>42732069 #>>42733041 #>>42736387 #>>42739407 #
hakonjdjohnsen No.42728962
This, very much this!

I do research in a subfield of optics called nonimaging optics (optics for energy transfer, e.g. solar concentrators or lighting systems). We typically use these optical design applications, and your observations are absolutely correct. Make some optical design software that uses GPUs for raytracing and reverse-mode autodiff for optimization, sprinkle in some other modern techniques, and you may blow these older tools out of the water.
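
To make the autodiff point concrete, here is a deliberately tiny toy in JAX (a thin-lens model with made-up numbers, nothing like a real design problem): because the ray trace is differentiable, reverse-mode autodiff recovers the focal length that focuses the bundle.

    import jax
    import jax.numpy as jnp

    # Toy differentiable "raytrace": parallel rays at heights h pass a thin
    # lens with focal length f, then travel a distance z to a sensor plane.
    def spot_size(f, h, z):
        angle = -h / f               # thin-lens ray bending
        x = h + z * angle            # ray height at the sensor
        return jnp.mean(x ** 2)      # zero when every ray hits the axis

    h = jnp.linspace(-1.0, 1.0, 64)  # incoming ray bundle
    z = 50.0                         # sensor distance
    f = 30.0                         # initial (wrong) focal length

    grad_fn = jax.grad(spot_size)    # reverse-mode autodiff
    for _ in range(100):
        f = f - 500.0 * grad_fn(f, h, z)  # plain gradient descent
    print(f)                              # approaches f = z = 50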

I am hoping to get some projects going in this direction (feel free to reach out if you are interested).

PS: I help organize an academic conference in my subfield of optics. We are running a design competition this year [1,2]. It would be super cool if someone submitted a design made by drawing inspiration from modern computer graphics tools (maybe using Mitsuba 3, by one of the authors of this book?) instead of using our classical applications in the field.

[1] https://news.ycombinator.com/item?id=42609892

[2] https://nonimaging-conference.org/competition-2025/upload/

replies(2): >>42732901 #>>42733911 #
accurrent No.42733911
Sounds a bit like https://github.com/mitsuba-renderer/mitsuba2
replies(1): >>42734724 #
hakonjdjohnsen No.42734724
Yes, exactly. I have not looked at Mitsuba 2, but Mitsuba 3 is absolutely along these lines. It is just starting to be picked up by the nonimaging/illumination community; for example, there was a paper last year from Aurele Adam's group at TU Delft where they used it to optimize a "magic window" [1]. Some tradeoffs and constraints are a bit different when doing optical design versus (inverse) rendering, but it definitely shows what is possible.

[1] https://doi.org/10.1364/OE.515422
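
For anyone curious what this looks like in practice, the public Mitsuba 3 tutorials follow roughly this pattern (a sketch from memory, not the paper's actual setup, and API details may have drifted between versions):

    import mitsuba as mi
    import drjit as dr

    mi.set_variant('llvm_ad_rgb')        # or 'cuda_ad_rgb' on a GPU

    scene = mi.load_dict(mi.cornell_box())
    img_ref = mi.render(scene, spp=64)   # reference image to match

    params = mi.traverse(scene)          # exposes differentiable parameters
    key = 'red.reflectance.value'
    opt = mi.ad.Adam(lr=0.05)
    opt[key] = mi.Color3f(0.3, 0.3, 0.3) # perturbed initial guess
    params.update(opt)

    for it in range(50):
        img = mi.render(scene, params, spp=4)
        loss = dr.mean(dr.sqr(img - img_ref))  # image-space objective
        dr.backward(loss)                # reverse-mode AD through the renderer
        opt.step()
        params.update(opt)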

replies(2): >>42735657 #>>42735658 #
1. roflmaostc No.42735658
Shameless plug: we use Mitsuba 3/Dr.JIT for image optimization around volumetric 3D printing: https://github.com/rgl-epfl/drtvam
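
The core idea, stripped of the real physics, is to optimize projector patterns so that the accumulated light dose inside the resin matches a target shape. A toy 2D sketch of that loop in JAX (not our actual code, which traces real rays, and it ignores absorption and resin kinetics):

    import jax
    import jax.numpy as jnp

    # Choose one 1D projector pattern per angle so the summed dose in a
    # 2D "vat" matches a target shape (a disc), by gradient descent.
    N, K = 64, 48
    xs, ys = jnp.meshgrid(jnp.linspace(-1, 1, N), jnp.linspace(-1, 1, N))
    target = ((xs**2 + ys**2) < 0.4**2).astype(jnp.float32)

    angles = jnp.linspace(0.0, jnp.pi, K, endpoint=False)
    u_axis = jnp.linspace(-1.5, 1.5, N)   # projector pixel coordinate

    def dose(patterns):
        def one_angle(theta, pattern):
            u = xs * jnp.cos(theta) + ys * jnp.sin(theta)
            return jnp.interp(u.ravel(), u_axis, pattern).reshape(u.shape)
        return jax.vmap(one_angle)(angles, patterns).mean(axis=0)

    def loss(patterns):
        p = jnp.abs(patterns)             # light doses are non-negative
        return jnp.mean((dose(p) - target) ** 2)

    patterns = jnp.full((K, N), 0.1)
    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(200):
        patterns = patterns - 500.0 * grad_fn(patterns)
    print(float(loss(patterns)))          # drops as the dose matches the disc
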
replies(2): >>42735866 #>>42736651 #
2. hakonjdjohnsen No.42735866
Looks really cool! I look forward to reading your paper. Do you know if a recording of the talk is/will be posted somewhere?
replies(1): >>42739387 #
3. pjmlp No.42736651
It looks quite interesting, especially the part about scripting everything in Python with a JIT, instead of the traditional approach of having to do everything in either C or C++.

Looking forward to some weekend paper reading.
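
For reference, the Dr.Jit programming model looks roughly like this (a minimal sketch based on the library's documentation; treat the details as approximate):

    import drjit as dr
    from drjit.llvm import Float          # or drjit.cuda.Float on a GPU

    # Operations on Dr.Jit arrays are traced symbolically; the library
    # JIT-compiles and runs a fused kernel when a result is needed.
    x = dr.arange(Float, 10_000_000)
    y = dr.sqrt(x) * dr.sin(x * 0.001) + 2.0
    print(y[42])                          # evaluation triggers compile + launch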

4. roflmaostc No.42739387
We presented this work at SIGGRAPH Asia 2024, but I don't think they record the talks?

Maybe at some point we will also do an online workshop about it.