
56 points by bmadduma | 1 comment

Working on automating small business finance (bookkeeping, reconciliation, basic reporting).

One thing I keep noticing: compared to programming, accounting often looks like the more automatable problem:

It’s rule-based. Double entry, charts of accounts, tax rules, materiality thresholds. For most day-to-day transactions you’re not inventing new logic; you’re applying existing rules.

It’s verifiable. The books either balance or they don’t. Ledgers either reconcile or they don’t. There’s almost always a “ground truth” to compare against (bank feeds, statements, prior periods). A minimal check of the balancing invariant is sketched just below.

It’s boring and repetitive. Same vendors, same categories, same patterns every month. Humans hate this work. Software loves it.
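To make “verifiable” concrete: the double-entry invariant fits in a few lines of Python. A toy sketch (the transaction shape here is invented for illustration, not any particular tool’s format):

    from decimal import Decimal

    # A transaction is a list of (account, signed amount) splits:
    # debits positive, credits negative. Invented shape, for illustration.
    txn = [
        ("Expenses:Office", Decimal("42.50")),
        ("Assets:Checking", Decimal("-42.50")),
    ]

    def balances(splits):
        # Double-entry invariant: the splits of a transaction sum to zero.
        return sum(amount for _, amount in splits) == Decimal("0")

    assert balances(txn), "books don't balance"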

With accounting, at least at the small-business level, most of the work feels like:

normalize data from banks / cards / invoices

apply deterministic or configurable rules

surface exceptions for human review

run consistency checks and reports

The truly hard parts (tax strategy, edge cases, messy history, talking to authorities) are a smaller fraction of the total hours but require humans. The grind is in the repetitive, rule-based stuff.
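To sketch what that loop might look like (everything below is invented for illustration: the transaction shape, the rules, the bank-feed check; it's not any real product's format):

    from dataclasses import dataclass

    @dataclass
    class Txn:
        date: str
        description: str
        amount_cents: int          # integer cents, to avoid float rounding
        category: str | None = None

    # Deterministic, user-configurable rules: (substring, category).
    # Patterns and categories here are made up.
    RULES = [
        ("AWS", "Expenses:Hosting"),
        ("STRIPE PAYOUT", "Income:Sales"),
        ("UBER", "Expenses:Travel"),
    ]

    def categorize(txns):
        exceptions = []
        for t in txns:
            cat = next((c for pat, c in RULES if pat in t.description.upper()), None)
            if cat is None:
                exceptions.append(t)   # surface for human review
            else:
                t.category = cat
        return exceptions

    def reconciles(txns, bank_balance_cents, opening_cents=0):
        # Consistency check: the ledger must agree with the bank feed.
        return opening_cents + sum(t.amount_cents for t in txns) == bank_balance_cents

Nothing in that loop needs a model; where an LLM might help, if anywhere, is in working through the exception queue.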

missedthecue:
"It’s verifiable. The books either balance or they don’t. Ledgers either reconcile or they don’t. There’s almost always a “ground truth” to compare against (bank feeds, statements, prior periods). It’s boring and repetitive. Same vendors, same categories, same patterns every month. Humans hate this work. Software loves it."

These are all true statements, but all of those things are solvable with classic software. QuickBooks has done this for decades now. The parts of accounting that aren't solvable with classic computing are generally also not solvable by adding LLMs into the mix.

Kiboneu:
This conviction doesn't seem to acknowledge the problem at scale. Decades of great UI development will still leave out edge cases that users need the tool to handle. This happens fundamentally because the people who need to use the tools are not the people who make them; they rarely even talk to each other (instead, users are "studied" via analytics).

When /humans/ bring up the idea of integrating LLMs into UIs, I think most of the time the sentiment comes from legitimate frustration with how the UI is currently designed. To be clear, this is a very different thing from a company shimming Copilot into the UI, because the way these companies use LLMs is to delegate tasks away from users rather than to improve the existing interfaces so users can complete those tasks themselves. There are /decades/ of HCI research on adaptive interfaces that address this, dating back to expert systems and long before LLMs. It's more relevant than ever, yet in most implementations it's all going out the window!

My experience with accounting ^H^H^H^H^H^H^H^H^H^H bookkeeping / LLMs in general resonates with this. In GnuCash I wanted to bulk re-organize some transactions, but I couldn't find a way to do it quickly through the UI. All the books are kept in a SQL db, and I didn't want to study the schema. I decided to experiment by getting the LLM to emit a Python script that would make the appropriate manipulations to the DB. This seemed to take the best of all worlds: the script was relatively straightforward to verify, and even though I used a closed-source model, it had no access to the DB that contained the transactions.
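For flavor, the emitted script was roughly this shape. Caveats: the table and column names below (accounts, splits, transactions, account_guid) match my recollection of GnuCash's SQLite backend, so treat them as assumptions, check them against your own file, and run this on a copy with GnuCash closed:

    import sqlite3

    DB = "books.gnucash"                     # a book saved with the sqlite3 backend
    SRC, DST = "Imbalance-USD", "Software"   # leaf account names (examples)
    PATTERN = "%JETBRAINS%"                  # which transactions to move, by description

    con = sqlite3.connect(DB)
    cur = con.cursor()

    def guid(name):
        # GnuCash stores leaf names; the account tree hangs off parent_guid.
        row = cur.execute("SELECT guid FROM accounts WHERE name = ?", (name,)).fetchone()
        if row is None:
            raise SystemExit(f"no account named {name!r}")
        return row[0]

    src, dst = guid(SRC), guid(DST)

    # Dry run: list the splits that would move, then eyeball them.
    for split_guid, desc in cur.execute(
            """SELECT s.guid, t.description FROM splits s
               JOIN transactions t ON t.guid = s.tx_guid
               WHERE s.account_guid = ? AND t.description LIKE ?""",
            (src, PATTERN)):
        print(split_guid, desc)

    # Re-point those splits at the new account.
    cur.execute(
        """UPDATE splits SET account_guid = ?
           WHERE account_guid = ? AND tx_guid IN
             (SELECT guid FROM transactions WHERE description LIKE ?)""",
        (dst, src, PATTERN))
    con.commit()
    con.close()

The verifiability the thread is about shows up here: the dry-run output plus a before/after trial balance in the GnuCash UI made the change easy to check, even without trusting the model that wrote the script.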

Sure, other tools may have solved this problem directly. But again, the point isn't to expect someone to make a great tool for you; it's to have a tool that helps you make it better for yourself. Given the verifiability, maybe this /is/ in fact one of the best places for LLMs.