Here's an idea: we should be able to record the full experience stream (or just the laptop screen, let's say) for a person -- digitize it suitably (e.g. pull out all the text, or tokenize it with a VLM, etc.), attach some metadata, and have the whole database ready for them to query / play with using powerful LLMs.
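Something like this minimal ingestion sketch, assuming `mss` for screen capture and `pytesseract` for pulling out the text (a VLM captioner could slot into the same spot); the schema, the `capture_frame` helper, and the 30-second cadence are all illustrative:

```python
import sqlite3
import time

import mss
import pytesseract
from PIL import Image

db = sqlite3.connect("experience.db")
db.execute("CREATE TABLE IF NOT EXISTS frames (ts REAL, app TEXT, text TEXT)")

def capture_frame(app_name="unknown"):
    """Grab the primary screen, digitize it, store it with metadata."""
    with mss.mss() as sct:
        raw = sct.grab(sct.monitors[1])      # monitor 1 = primary display
    img = Image.frombytes("RGB", raw.size, raw.rgb)
    text = pytesseract.image_to_string(img)  # or: caption/tokenize with a VLM
    db.execute("INSERT INTO frames VALUES (?, ?, ?)",
               (time.time(), app_name, text))
    db.commit()

if __name__ == "__main__":
    while True:
        capture_frame()
        time.sleep(30)                       # one frame every 30 seconds
```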
To the extent that you can store everything and perform effective recall/search, it obviates the need to carefully "pre-process" bits by noting them down in the right file, tagging them appropriately, adding metadata, etc. -- all of which should make for a much nicer UX.
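Querying can then stay dumb and brute-force for as long as it can afford to. A sketch of recall over the same store, using SQLite's built-in FTS5 for the search step; `ask_llm` is a stand-in for whatever model API you'd actually call:

```python
import sqlite3

db = sqlite3.connect("experience.db")
db.execute("CREATE TABLE IF NOT EXISTS frames (ts REAL, app TEXT, text TEXT)")
db.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS frames_fts
              USING fts5(text, content='frames', content_rowid='rowid')""")
db.execute("INSERT INTO frames_fts(frames_fts) VALUES ('rebuild')")

def ask_llm(prompt: str) -> str:
    """Stand-in: send the prompt to your LLM of choice."""
    raise NotImplementedError

def recall(question: str, k: int = 5) -> str:
    """No up-front filing or tagging: search everything at query time."""
    rows = db.execute(
        # (a real version would escape the MATCH query string)
        "SELECT text FROM frames_fts WHERE frames_fts MATCH ? "
        "ORDER BY rank LIMIT ?",
        (question, k)).fetchall()
    context = "\n---\n".join(r[0] for r in rows)
    return ask_llm(f"Screen captures:\n{context}\n\nQuestion: {question}")
```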
Given the Jevons paradox though -- I wouldn't be surprised if our experience stream becomes so dense/rich that storing everything and querying over it eventually becomes prohibitively expensive. We would then have to construct pre-processing rules to prune the stream, and amortize the cost of certain deductions by performing them at ingestion time instead of repeating them for each query. It's all just the basic principles of system design at the end of the day, and how systems need to be rearranged as various component capabilities (and user needs) scale.
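For flavor, a sketch of what that rearrangement might look like on the same store: a pruning rule that drops unchanged consecutive frames at ingestion, plus a rolling daily summary computed once at write time so queries can read one row instead of replaying thousands of frames. The dedupe rule and the `summarize` stand-in are assumptions, not a real design:

```python
import hashlib
import sqlite3
import time

db = sqlite3.connect("experience.db")
db.execute("CREATE TABLE IF NOT EXISTS frames (ts REAL, app TEXT, text TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS daily (day INTEGER PRIMARY KEY, summary TEXT)")

def summarize(text: str) -> str:
    """Stand-in: a real system would have an LLM compress this."""
    return text[-2000:]

_last_hash = None

def ingest(text: str, app: str = "unknown") -> None:
    global _last_hash
    h = hashlib.sha256(text.encode()).hexdigest()
    if h == _last_hash:   # pruning rule: skip unchanged consecutive frames
        return
    _last_hash = h
    ts = time.time()
    db.execute("INSERT INTO frames VALUES (?, ?, ?)", (ts, app, text))
    # Amortize at ingestion: fold the frame into today's summary once,
    # instead of re-deriving it from the raw frames on every query.
    day = int(ts // 86400)
    prev = db.execute("SELECT summary FROM daily WHERE day = ?", (day,)).fetchone()
    merged = summarize(((prev[0] + "\n") if prev else "") + text)
    db.execute("INSERT OR REPLACE INTO daily VALUES (?, ?)", (day, merged))
    db.commit()
```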