
87 points | davidbarker | 1 comment | source
1. rapjr9 No.37747531 [source]
I wonder how it handles noisy environments, low speaking volume, and TV or music playing in the background. I've tried using pocket recorders while driving and the results can be unintelligible. It would be more useful if it could do a rough interpretation of activities (based on sound or accelerometer data) to make it easier to place transcriptions in context. Something like <music playing: 45 minutes> <walking: 3 minutes> <door opening: 1 minute> <transcription of conversation: 10 minutes>
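The timeline format above could be produced by collapsing a per-minute stream of activity labels into runs. Here's a minimal sketch; the upstream classifier (audio or accelerometer based) is assumed, and `summarize_activities` is a hypothetical name, not part of any real product:

```python
from itertools import groupby

def summarize_activities(labels):
    """Collapse a per-minute activity stream (output of a hypothetical
    audio/accelerometer classifier) into a timeline string like
    '<music playing: 45 minutes> <walking: 3 minutes>'."""
    parts = []
    for label, run in groupby(labels):
        minutes = len(list(run))
        unit = "minute" if minutes == 1 else "minutes"
        parts.append(f"<{label}: {minutes} {unit}>")
    return " ".join(parts)

# Example stream: 45 min of music, 3 min walking, 1 min door, 10 min talk
stream = (["music playing"] * 45 + ["walking"] * 3
          + ["door opening"] * 1 + ["conversation"] * 10)
print(summarize_activities(stream))
```

Consecutive identical labels merge into one tagged span, so the summary stays short even over hours of recording.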

Reminds me of Deborah Estrin's work on Participatory Sensing:

https://scholar.google.com/citations?hl=en&user=3_WYcR4AAAAJ...

I remember a conference meeting where her team wore cellphones around their necks, set to automatically capture images every few seconds, and didn't reveal it until they presented a paper about it, including just-captured images of all of us! Everyone was a bit surprised, some people got upset, and I think she promised to delete all the images. She's also done work on recording diet with this kind of periodic camera capture, and audio detection of eating is possible too. A life diary would be something many people find useful, especially those with memory issues (which is just about everyone to some extent). The privacy implications are severe, though.