
200 points dcu | 18 comments
1. fkyoureadthedoc ◴[] No.44456481[source]
> Another important file is _users.csv which contains user credentials and roles. It has the same format as other resources, but with a special _users collection name. There is no way to add new users via API, they must be created manually by editing this file:

    admin,1,salt,5V5R4SO4ZIFMXRZUL2EQMT2CJSREI7EMTK7AH2ND3T7BXIDLMNVQ====,"admin"
    alice,1,salt,PXHQWNPTZCBORTO5ASIJYVVAINQLQKJSOAQ4UXIAKTR55BU4HGRQ====,
> Here we have user ID which is user name, version number (always 1), salt for password hashing, and the password itself (hashed with SHA-256 and encoded as Base32). The last column is a list of roles assigned to the user.

I haven't had to handle password hashing in like a decade (thanks SSO), but isn't fast hashing like SHA-256 bad for it? Bcrypt was the standard last I did it. Or is this just an example and not what is actually used in the code?
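For reference, a minimal sketch of the scheme as quoted above (the digest/encoding choice — SHA-256, Base32 — comes from the quote; the exact way pennybase combines salt and password is an assumption here):

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
)

// hashPasswd sketches the quoted format: SHA-256 over salt+password,
// Base32-encoded. The salt+pass concatenation order is assumed, not
// taken from the actual pennybase source.
func hashPasswd(salt, pass string) string {
	sum := sha256.Sum256([]byte(salt + pass))
	return base32.StdEncoding.EncodeToString(sum[:])
}

func main() {
	// 32 digest bytes encode to 56 Base32 characters ending in "====",
	// matching the shape of the hashes in the _users.csv excerpt.
	fmt.Println(hashPasswd("salt", "secret"))
}
```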

replies(4): >>44456509 #>>44457381 #>>44457415 #>>44457642 #
2. reactordev ◴[] No.44456509[source]
Indeed bcrypt is preferred but this is just a simple backend. My first ick was using CSV as storage as opposed to golang’s builtin SQLite support.

A SQLite connection can be made with just a sqlite://data.db connection string.

replies(1): >>44456539 #
3. jitl ◴[] No.44456539[source]
Golang does not have built in SQLite. It has a SQL database abstraction in the stdlib but you must supply a sqlite driver, for example one of these: https://github.com/cvilsmeier/go-sqlite-bench

However using the stdlib abstraction adds a lot of performance overhead; although it’ll still be competitive with CSV files.
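To make the point concrete, wiring up a driver looks roughly like this — a sketch using the pure-Go `modernc.org/sqlite` driver as one example (a CGO-based driver like `mattn/go-sqlite3` registers under the name `"sqlite3"` instead):

```go
package main

import (
	"database/sql"
	"log"

	// database/sql is only the abstraction; the blank import supplies
	// the actual SQLite implementation and registers it as "sqlite".
	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "data.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY)`); err != nil {
		log.Fatal(err)
	}
}
```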

replies(1): >>44456608 #
4. reactordev ◴[] No.44456608{3}[source]
Ok, one additional dependency to your go.mod - big deal. And by builtin I was referring to the database/sql module which was designed for this.
replies(3): >>44456835 #>>44456856 #>>44457711 #
5. fkyoureadthedoc ◴[] No.44456835{4}[source]
Maybe this is why they used SHA-256 too: it's in the stdlib, whereas bcrypt is a separate package (even if an "official" one).
replies(1): >>44456932 #
6. gtufano ◴[] No.44456856{4}[source]
Most of the more common SQLite implementations for Go require CGO, and that's a pretty steep requirement; it's definitely more than a line in go.mod.
7. ncruces ◴[] No.44456932{5}[source]
The standard lib has pbkdf2 though.

I'm guessing the goal is that the file can be managed more easily with a text editor and some shell utils.
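A sketch of that stdlib route (`crypto/pbkdf2` landed in the standard library in Go 1.24; on older toolchains the same function lives in `golang.org/x/crypto/pbkdf2` with a slightly different signature). The salt handling, Base32 encoding, and iteration count here are illustrative assumptions, not pennybase's actual code:

```go
package main

import (
	"crypto/pbkdf2" // stdlib as of Go 1.24
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"log"
)

// hashPasswd is a hypothetical drop-in for the single-pass SHA-256 scheme:
// same salt and Base32 encoding, but the digest is stretched with PBKDF2.
func hashPasswd(salt, pass string) string {
	// 100_000 iterations is illustrative; tune for your hardware.
	key, err := pbkdf2.Key(sha256.New, pass, []byte(salt), 100_000, 32)
	if err != nil {
		log.Fatal(err)
	}
	return base32.StdEncoding.EncodeToString(key)
}

func main() {
	fmt.Println(hashPasswd("salt", "secret"))
}
```

The output stays a 56-character Base32 string, so the `_users.csv` format would not need to change.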

8. ehutch79 ◴[] No.44457381[source]
If it's in the examples, it WILL make it to someone's production code
9. zserge ◴[] No.44457415[source]
Like others have guessed, I limited myself to what the Go stdlib offers. Since it's a personal/educational project, I only wanted to play around with this sort of architecture (similar to the k8s apiserver and various popular BaaSes). It was never meant to run outside of my localhost, so password security or the choice of database was never a concern: whatever is in the stdlib and is "good enough" would work.

I also tried to make it a bit more flexible: to use `bcrypt` one can provide their own `pennybase.HashPasswd` function. To use SQLite one can implement the five methods of the `pennybase.DB` interface. It's not perfect, but at ~700 lines of code it should be possible to customise any part of it without much cognitive difficulty.

replies(1): >>44459795 #
10. bob1029 ◴[] No.44457642[source]
> isn't fast hashing like SHA-256 bad for it

Fast hashing is only a concern if your database becomes compromised and your users are incapable of using unique passwords on different sites. The hashing taking forever is entirely about protecting users from themselves in the case of an offline attack scenario. You are burning your own CPU time on their behalf.

In an online attack context, it is trivial to prevent an attacker from cranking through billions of attempts per second and/or to make the hashing operation appear to take a constant amount of time.

replies(1): >>44459120 #
11. jitl ◴[] No.44457711{4}[source]
Well, the project goal seems to be extreme minimalism and stdlib-only, and the choice of human-readable data stores and manually editing the user list suggests a goal of needing only `vim` and `sha256sum` for administration.
12. ehutch79 ◴[] No.44459120[source]
Users don’t use unique passwords. Don’t expect them to.
replies(1): >>44464498 #
13. Throwaway123129 ◴[] No.44459795[source]
I think adding `golang.org/x/crypto` as a second dependency is fine. It's basically stdlib at this point (though slightly less stability guarantees).
14. sneak ◴[] No.44464498{3}[source]
For a backend you can enforce unbruteforceable API keys that are long and random.
replies(1): >>44464731 #
15. ehutch79 ◴[] No.44464731{4}[source]
What does that look like, and how does that prevent a compromise exposing users whose non-unique passwords are stored in a known broken hash?
replies(1): >>44465374 #
16. sneak ◴[] No.44465374{5}[source]
If you enforce long, random, server-assigned API keys, they are guaranteed to be unique.

You don't need bcrypt or PBKDF2 with API keys, as they are not passwords: they are high-entropy, unique, and long, unlike passwords.

replies(1): >>44466322 #
17. ehutch79 ◴[] No.44466322{6}[source]
Ok, but if the passwords are stored with a broken SHA hash, and the server is compromised, how do API keys prevent users who use “packers123” for every site from having their passwords exposed?
replies(1): >>44467509 #
18. bob1029 ◴[] No.44467509{7}[source]
I think the more interesting conversation goes like:

How many CPU seconds should I burn for every user's login attempt to compensate for the remote possibility that someone steals the user database? Are we planning to have the database stolen?

Even if you spin for 30 minutes per attempt, someone with more hardware and determination than your enterprise could eventually crack every hash. How much money is it worth to play with a two-layer cake of unknowns?

Has anyone considered the global carbon footprint of this bitcoin-style mining for passwords? How many tons of CO2 should be emitted over something that will probably never happen? This is like running the diesel generators 24/7/365 in anticipation of an outage because you couldn't be bothered to pay for a UPS.