
46 points | dr-j | 2 comments

Hi HN! I’m Johan. I built Dlog, a journaling app with an AI coach that tracks how your personality, daily experiences, and well-being connect over time. It’s based on my PhD research in entrepreneurial well-being.

Edit: here's a video demo so you can see it before downloading: https://www.youtube.com/watch?v=74C4P8I164M - it's unvarnished but I'm told that's how people like it here :)

How Dlog works

- Journal and set goals/projects; Dlog scores entries on-device (sentiment + narrative signals) and updates your personal model.

- A built-in structural equation model (SEM) estimates which factors actually move your well-being week to week.

- The Coach turns those findings into specific guidance (e.g., “protect 90 minutes after client calls; that’s when energy dips for you”).

- No account; your journals live locally (in your calendar). You decide what, if anything, leaves the device.

The problem

- Generic AI coaches give advice without understanding your personality or context.

- Traditional journaling is reflective but doesn’t surface causal patterns.

- Well-being apps rarely account for individual differences or test what works for you over time.

What my research found (plain English)

In my PhD I modeled how Personality, Character, Resources, and Well-Being interact over time. The key is latent relationships: for example, Autonomy can buffer the impact of low Extraversion on social drain, while time/energy constraints mediate whether “good advice” is actionable. These effects are person-specific and evolve—so you need a model that learns you, not averages.

The solution

Dlog pairs on-device journaling analytics with an SEM that updates weekly. You get a running estimate of “what moves the needle for me,” and the Coach translates that into concrete suggestions aligned with your goals and constraints.
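To make the "what moves the needle for me" idea concrete, here is a deliberately simplified stand-in for that weekly update, not Dlog's actual SEM: collapse each construct's indicators into a naive factor score, then fit a least-squares regression of weekly well-being on those scores. All variable names and the two constructs (autonomy, energy) are illustrative assumptions.

```python
# Toy stand-in for the weekly "what moves the needle" estimate.
# Step 1: collapse multiple indicators into naive latent factor scores.
# Step 2: regress weekly well-being on those scores.
import numpy as np

rng = np.random.default_rng(0)
weeks = 12

# Hypothetical indicators: three survey items per latent construct.
autonomy_items = rng.normal(0, 1, (weeks, 3))
energy_items = rng.normal(0, 1, (weeks, 3))

# Naive factor scores: mean of each construct's indicators.
autonomy = autonomy_items.mean(axis=1)
energy = energy_items.mean(axis=1)

# Simulated person whose well-being is driven mostly by autonomy.
wellbeing = 0.8 * autonomy + 0.2 * energy + rng.normal(0, 0.1, weeks)

# Least-squares fit: which factor carries the most weight for this person?
X = np.column_stack([autonomy, energy, np.ones(weeks)])
coef, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
print({"autonomy": round(float(coef[0]), 2), "energy": round(float(coef[1]), 2)})
```

A real SEM additionally models measurement error and the relationships between latent factors, but the output is the same in spirit: per-person weights the Coach can act on.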

Early stories (anonymized from pilot users)

- A founder saw energy dips clustered after external calls; moving deep work to mornings reduced “bad days” and improved weekly mood stability.

- A solo designer’s autonomy scores predicted well-being more than raw hours worked; small boundary changes (client comms windows) helped more than time-tracking tweaks.

Tech & security

- Platform: macOS (Swift/SwiftUI). Data: local storage + EventKit calendar for entries/timestamps.

- Analytics: on-device sentiment + narrative features; SEM computed locally; weekly updates compare to your baseline.

- AI Coach: uses an enterprise LLM API for reasoning on derived features/summaries. By default, raw journal text does not leave the device; you can opt-in per prompt if you want the Coach to read a specific passage.

- Why 61 baseline variables? The SEM needs multiple indicators per construct (Personality, Character, Resources, Well-Being) to estimate stable latent factors without overfitting; weekly check-ins refresh those signals.
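To illustrate the "derived features, not raw text" boundary above, here is a hypothetical sketch of what a Coach request could look like. The field names, scores, and function are my assumptions for illustration, not Dlog's actual schema; the point is that by default only on-device scores enter the payload, and raw text is included only on an explicit per-prompt opt-in.

```python
# Hypothetical sketch of a "derived features only" Coach request:
# raw journal text never enters the payload unless the user opts in.
import json

def build_coach_payload(entry_text: str, opt_in_raw_text: bool = False) -> str:
    # Stand-in for the on-device scoring step (sentiment + narrative signals).
    derived = {
        "sentiment": 0.3,                      # placeholder on-device score
        "word_count": len(entry_text.split()),
        "topics": ["client_calls", "energy"],  # placeholder tags
    }
    payload = {"features": derived}
    if opt_in_raw_text:
        # Only with an explicit per-prompt opt-in does raw text leave the device.
        payload["raw_text"] = entry_text
    return json.dumps(payload)

msg = build_coach_payload("Long client call today, felt drained after.")
assert "raw_text" not in json.loads(msg)  # raw text excluded by default
```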

What I’ve learned building this

- Users value clarity with depth: concise recommendations paired with focused dashboards, often 5–10 charts, to explain the “why” and trade-offs.

- Cold start matters: a solid baseline makes the first week of insights credibly useful.

- Privacy UX needs to be explicit: users want granular control over what the Coach can read, per request.

I’m looking for feedback on:

- Onboarding (baseline survey and first-week experience)

- Coach guidance clarity and usefulness

- Analytics accuracy vs. your lived experience

- Edge cases, bugs, and performance

Download: https://dlog.pro

If you hit token limits while testing, email me at johan@dlog.pro

Background

PhD (Hunter Center for Entrepreneurship, Strathclyde), MBA (Babson), BComm (UCD). I study solo self-employment and well-being, and built Dlog to bring that research into a tool practitioners can use.

Note: The Coach activates after your first scored entry. If you haven’t written one yet, you’ll see a hold state—add a quick journal entry and it unlocks.

Appearance: On a few Macs the initial theme can render darker than intended. If you see this, switch to Light Mode as a temporary workaround; a fix is incoming.

Edit: For general users it's free for 14 days with 10K free tokens; after that it's $1.99 per month at the moment. However, HN readers who DM or email me with the email they register with get a free perpetual license, so there's no monthly fee, and I'll add 1 million tokens.

1. huem0n (No.45732150)
Please, both Hacker News readers and the author, take a closer look to see the cracks on the site. I'm fine with using AI generation, but it needs human review, especially for legally binding stuff like the privacy policy.

1. Look at the concepts pie on the home page. The text in the pie is unreadable. It's overlapping and overflowing, white text clipping onto a white background, with terms like "topic tagging" that are not an actual example. It's like no human looked at the image before putting it on the website. Maybe just a slip-up, we all make mistakes, let's keep looking.

2. I didn't understand the data storage/privacy from the video, so let's look at the privacy policy. At one point the policy says "Do we receive any information from third parties?

No, we do not receive any information from third parties."

Right before saying:

"journal entries or project-related text that you select are sent to the ChatGPT-5 thinking nano API operated by OpenAI."

Open AI *is a third party*! The answer is "Yes we send data to Open AI under these conditions". That's bad.

3. Let's look deeper. The privacy policy says they store 3 things, with the first bullet point (in full) being "A unique user ID number that cannot be used to identify you." You're telling me a literal Identification (ID) Number can't identify me? Why does it exist? That is borderline nonsensical.

4. The video has similarly vague stuff, saying the data is processed locally after saying it's going to ChatGPT-5.

I'm giving harsh feedback because I want a project like this to exist, be done right, and succeed. I understand "ship fast and iterate". You're going too fast and you're not shipping an MVP, there is lots of feature creep.

Even when everything looks good, people should be hella skeptical about an app that wants to (potentially) harvest extremely personal daily journal logs. When every page smells like "I generated this and didn't fully check it" it makes me imagine how many hidden problems there are in the codebase.

- The kinda-rough AI video tells everyone "I don't have time to record a 5 min video of my own project". If you want me to believe you care, at least hire a narrator on Fiverr for $20 if you don't like speaking and/or showing your face. Why should I trust what you say you'll do with my most personal data when you don't even show yourself/show a human?

- There are only three important things: pricing, privacy, and the data analysis / coach. Leading with price is good/solved. What's missing is clarity about privacy. The Hacker News post is much clearer; the website is not. I don't need more words, I need to know when the data is and is not shared, and I need to be convinced you're responsible. Right now phrasing like "Dlog's private AI model" makes it confusing what's local and what's shipped to OpenAI.

- Even when explained clearly, privacy is going to be a problem. Let me use my own model/token/URL. It's easy to point to a local URL that responds with data in the exact same format as GPT-5. That kind of feature is 10x more important than changing the color of the background.

- I'm not getting a coaching app because it has a good theme engine. Finish talking about coaching/analysis before going into themes and calendars etc. I don't even care how data is entered into the app until after I know the useful things it's doing. Give a real example of an insight that changed your daily choices.

- I think you can do it, and I'm glad to see someone trying to meet this usecase.
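Concretely, the bring-your-own-endpoint swap I mean is just a base-URL change, assuming the server speaks the OpenAI-style /v1/chat/completions shape (as local llama.cpp or Ollama servers do). The port and model name below are hypothetical, and this only builds the request rather than sending it:

```python
# Rough sketch of the BYO-endpoint idea: any server exposing an
# OpenAI-compatible /v1/chat/completions route could stand in for the hosted API.
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

url, body = build_chat_request("http://localhost:11434", "llama3", "Summarize my week")
print(url)  # http://localhost:11434/v1/chat/completions
```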

2. dr-j (No.45733720)
Appreciate the detailed look. A few clarifications and immediate fixes:

- Concepts pie: noted. It's a minor visualizer and not a current priority; there are dozens of other charts in the Dlog Lab tab. I'll queue a fix, but I'm focusing elsewhere first.

- Privacy policy: you're right, OpenAI is a third-party processor. I'm correcting the policy to say exactly when data is sent, what is sent, and under what controls (Enterprise API with no training/retention). I'll also add a simple data-flow diagram.

- Local vs cloud: journals live on-device; the SEM runs locally. Scoring only happens when you explicitly choose to score; there's no background upload. I'm adding a per-journal "Include in Coach analyses" toggle, and a simple "Remove names" anonymizer (ships in the next few days) so names are stripped before any scoring call.

- BYO endpoint: not on the near-term roadmap. I'm prioritizing clear privacy controls and product focus over supporting custom model URLs right now.

- Copy/video: I'll tighten the site copy to lead with privacy and analysis, and make the local vs cloud boundary crisp.

Thanks for pushing on clarity—fixes are in motion.