
197 points by amichail | 3 comments
consumer451 ◴[] No.41865107[source]
The most complete plan for this was proposed by JPL's Slava Turyshev and team. It has been selected for Phase III of NASA Innovative Advanced Concepts. [0]

> In 2020, Turyshev presented his idea of Direct Multi-pixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravitational Lens Mission. The lens could reconstruct the exoplanet image with ~25 km-scale surface resolution in 6 months of integration time, enough to see surface features and signs of habitability. His proposal was selected for the Phase III of the NASA Innovative Advanced Concepts. Turyshev proposes to use realistic-sized solar sails (~16 vanes of 10^3 m^2) to achieve the needed high velocity at perihelion (~150 km/sec), reaching 547 AU in 17 years.

> In 2023, a team of scientists led by Turyshev proposed the Sundiver concept,[1] whereby a solar sail craft can serve as a modular platform for various instruments and missions, including rendezvous with other Sundivers for resupply, in a variety of different self-sustaining orbits reaching velocities of ~5-10 AU/yr.
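
As a sanity check on those numbers (just arithmetic with the figures quoted above, nothing from the paper):

    # 1 AU/yr ≈ 4.74 km/s, so the quoted cruise works out to:
    AU_KM = 1.495978707e8          # km per AU
    YEAR_S = 365.25 * 24 * 3600    # seconds per year

    distance_au, cruise_years = 547, 17
    avg_au_per_yr = distance_au / cruise_years
    avg_km_per_s = avg_au_per_yr * AU_KM / YEAR_S
    print(f"{avg_au_per_yr:.1f} AU/yr ≈ {avg_km_per_s:.0f} km/s")
    # ≈ 32.2 AU/yr ≈ 153 km/s, consistent with the ~150 km/s perihelion speed quoted above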

Here is an interview with him laying out the entire plan.[2] It is the most interesting interview that I have seen in years, possibly ever.

[0] https://en.wikipedia.org/wiki/Slava_Turyshev#Work

[1] https://www2.mpia-hd.mpg.de/~calj/sundiver.pdf

[2] https://www.youtube.com/watch?v=lqzJewjZUkk

replies(6): >>41865368 #>>41866873 #>>41867659 #>>41870451 #>>41871463 #>>41874418 #
potamic ◴[] No.41866873[source]
A 6-month integration time is going to generate massive amounts of data. How do they intend to receive all of it back from 500 AU away?
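
For a rough feel of the scale (every parameter here is my guess, not a mission figure), even a modest detector read out continuously for six months lands in the hundreds-of-terabytes range:

    # All assumed: 1 Mpixel detector, 16-bit samples, 10 Hz readout, ~6 months.
    pixels, bytes_per_sample, readout_hz = 1_000_000, 2, 10
    seconds = 6 * 30 * 24 * 3600

    raw_bytes = pixels * bytes_per_sample * readout_hz * seconds
    print(f"raw data ≈ {raw_bytes / 1e15:.1f} PB")               # ≈ 0.3 PB

    downlink_bps = 1_000   # wild guess for a small craft at ~500 AU
    years = raw_bytes * 8 / downlink_bps / (365.25 * 24 * 3600)
    print(f"at {downlink_bps} bit/s ≈ {years:,.0f} years to send it all")   # tens of thousands of years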
replies(2): >>41867194 #>>41867348 #
andy_ppp ◴[] No.41867348[source]
The computer onboard likely merges everything into a final image in space?
replies(2): >>41867359 #>>41867396 #
defrost ◴[] No.41867396[source]
Orbiting instruments typically transmit raw instrument data blocked into lines or segments that are each surrounded by checksums.
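
Something like this, conceptually - a sketch of the kind of framing I mean, not any real space protocol (names and sizes are made up):

    import struct
    import zlib

    def frame_segments(raw: bytes, segment_size: int = 1024):
        """Block raw instrument data into fixed-size segments, each carrying a
        sequence number and a CRC-32 so a corrupted block can be detected and
        re-requested on its own (illustrative only)."""
        frames = []
        for seq, offset in enumerate(range(0, len(raw), segment_size)):
            payload = raw[offset:offset + segment_size]
            header = struct.pack(">QH", seq, len(payload))    # sequence number, payload length
            trailer = struct.pack(">I", zlib.crc32(payload))  # checksum "surrounding" the payload
            frames.append(header + payload + trailer)
        return frames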

It might be compressed for transmission, but raw data (warts and all) is king .. once it's "processed" and the raw data is discarded .. there's no recovering the raw.

Years later, raw data can be reprocessed with new algorithms and faster processing, and combined with other sources to create "better" processed images.

Onboard hardware errors (e.g. the historic Hubble Telescope error) can be "corrected" later on the ground with an elaborate backpropagated transfer function that optimally "fixes" the error, etc.
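
For Hubble, that "transfer function" fix amounted to deconvolving raw frames against the measured, aberrated point-spread function. A minimal Richardson-Lucy sketch (my own illustration, not anything the actual pipeline code looked like):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iterations=30):
        """Estimate the unblurred image from a raw frame and a measured PSF.
        Only possible after the fact if the raw frames were kept."""
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full(observed.shape, observed.mean(), dtype=float)
        for _ in range(iterations):
            reconvolved = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (reconvolved + 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate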

Data errors (spikes in cell values, glitches from cosmic rays, etc) can be combed out of the raw in post .. if smart people have access to the raw.
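
That "combing" is often not much more than sigma-clipping against a local median, e.g. (thresholds here are arbitrary, real pipelines tune them per instrument):

    import numpy as np
    from scipy.ndimage import median_filter

    def despike(raw_frame, threshold_sigma=5.0):
        """Replace isolated hot pixels (cosmic-ray hits, glitches) with the
        local median -- only meaningful on the unprocessed values."""
        local_median = median_filter(raw_frame, size=3)
        residual = raw_frame.astype(float) - local_median
        spikes = np.abs(residual) > threshold_sigma * residual.std()
        cleaned = raw_frame.copy()
        cleaned[spikes] = local_median[spikes]
        return cleaned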

Baking processing into the onboard instrument pipeline prior to transmission isn't good procedure.

replies(2): >>41867643 #>>41869430 #
1. tlb ◴[] No.41867643[source]
You could store all the data on the satellite, upload new code to process it differently, and download the resulting image. Then, the communications link just has to handle code (several MB up) and images (several MB down) instead of petabytes of data.

The launch mass of a petabyte of SSD is under 10 kg. I don't know if it would survive 17 years of space radiation though.
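
Rough arithmetic for that tradeoff (link rate, image size and drive figures are all guesses on my part):

    link_bps = 1_000                 # hypothetical downlink from ~500 AU
    image_bytes = 5 * 1024**2        # one processed image, a few MB
    archive_bytes = 1e15             # full raw archive kept onboard

    image_hours = image_bytes * 8 / link_bps / 3600
    archive_years = archive_bytes * 8 / link_bps / (365.25 * 24 * 3600)
    print(f"one image ≈ {image_hours:.0f} h, full raw archive ≈ {archive_years:,.0f} years")

    modules = archive_bytes / 8e12   # 8 TB flash modules
    print(f"≈ {modules:.0f} modules at ~10 g each ≈ {modules * 0.01:.1f} kg of bare flash")
    # shielding and packaging would account for most of the real launch mass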

replies(2): >>41867716 #>>41869323 #
2. defrost ◴[] No.41867716[source]
Well, you could.

I don't think I'd do that.

Ignoring the failure modes of a petabyte of SSD spending decades in deep space, what kinds of things are difficult and|or impossible if you were to

> store all the data on the satellite, upload new code to process it differently, and download the resulting image

?

3. modderation ◴[] No.41869323[source]
Just as a thought experiment, would it be viable to send up an array of traditional hard drives? Arrange them all for use as reaction wheels, then spin them up to persist/de-stage data while changing/maintaining targets.

Probably worse than sending up well-shielded flash, but I don't think the Seagate/WD warranty expressly forbids this usage.
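
For what it's worth, the angular momentum in one drive's platter stack is tiny -- roughly small-reaction-wheel territory (all figures below are guesses about a generic 3.5" drive, not a spec sheet):

    import math

    platter_mass_kg = 0.12       # a few platters
    platter_radius_m = 0.0475    # 3.5" form factor
    spin_rpm = 7200

    inertia = 0.5 * platter_mass_kg * platter_radius_m**2   # solid-disc approximation
    omega = spin_rpm * 2 * math.pi / 60
    print(f"≈ {inertia * omega * 1000:.0f} mN·m·s per drive")   # ≈ 100 mN·m·s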