In some situations, the “logical” block size can differ from the physical one. For example, buffered writes go through the page cache, which operates in PAGE_SIZE chunks (usually 4 KiB). Or your RAID stripe size might be misconfigured, stuff like that. Otherwise, the two should be equal for best results.
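On Linux you can check for yourself. Below is a minimal sketch (Linux-specific; the /dev/nvme0n1 path is just a placeholder) that asks a block device for its logical and physical sector sizes via the BLKSSZGET and BLKPBSZGET ioctls, and prints the filesystem's preferred I/O size from fstat for comparison:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>   /* BLKSSZGET, BLKPBSZGET */
#include <unistd.h>

/* Sketch: compare the device's logical/physical sector sizes with the
 * filesystem's preferred I/O size. Needs read access to the device node. */
int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "/dev/nvme0n1"; /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;
    unsigned int physical = 0;
    if (ioctl(fd, BLKSSZGET, &logical) == 0 &&
        ioctl(fd, BLKPBSZGET, &physical) == 0)
        printf("logical: %d bytes, physical: %u bytes\n", logical, physical);

    struct stat st;
    if (fstat(fd, &st) == 0)
        printf("preferred I/O size (st_blksize): %ld bytes\n",
               (long)st.st_blksize);

    close(fd);
    return 0;
}
```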
In general, we want the block size to be as large as possible!
If you have bigger files, then bigger blocks mean less fixed overhead from syscalls and NVMe/SATA requests: each request has a roughly constant cost, so fewer, larger requests amortize it better.
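To see where that fixed overhead shows up, here's a small sketch of a plain buffered read loop; the file path and buffer size are illustrative command-line arguments. The same bytes move either way; only the number of read() syscalls changes:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: read a file with a caller-chosen buffer size. The bytes moved
 * are the same either way; only the count of read() calls (the fixed
 * per-request cost) changes. */
int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file> [bufsize]\n", argv[0]);
        return 1;
    }
    size_t bufsize = argc > 2 ? strtoul(argv[2], NULL, 0) : 64 * 1024;

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(bufsize);
    long calls = 0;
    ssize_t n;
    while ((n = read(fd, buf, bufsize)) > 0)
        calls++;                        /* one kernel round trip each */

    /* e.g. a 1 GiB file: 4 KiB buffers -> ~262,144 calls,
       1 MiB buffers -> ~1,024 calls, for the same data. */
    printf("%ld read() calls with %zu-byte buffers\n", calls, bufsize);
    free(buf);
    close(fd);
    return 0;
}
```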
If your device's native block size is 4 KiB and you fetch 512-byte blocks, you need storage-side RAM to hold the smaller blocks, and you have to address each block independently. Meanwhile, if your blocks are bigger than the device's block size, you end up with fewer requests and syscalls. And if the requested block size turns out to be too large for the device, the OS can split your large request into smaller, device-appropriate ones, since the OS knows the hardware characteristics.
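If you want to feel those device constraints directly, O_DIRECT takes the page cache out of the picture. Here's a sketch assuming a 4096-byte logical block size (in practice, query it with BLKSSZGET as above): with O_DIRECT, the buffer address, length, and file offset must all be aligned to the device's logical block size, and one large aligned request replaces many small ones:

```c
#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: O_DIRECT bypasses the page cache, so the kernel no longer
 * hides block-size mismatches from you. Buffer address, length, and
 * file offset must all be aligned to the device's logical block size
 * (assumed 4096 here; query it with BLKSSZGET in practice). */
int main(int argc, char **argv) {
    const size_t block = 4096;          /* assumed device block size */
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    /* posix_memalign gives us a block-aligned buffer; a plain malloc'd
     * pointer could make the read fail with EINVAL. */
    if (posix_memalign(&buf, block, 16 * block) != 0) return 1;

    /* One 64 KiB request instead of sixteen 4 KiB ones: the kernel or
     * device splits it as needed, but we pay the syscall cost once. */
    ssize_t n = pread(fd, buf, 16 * block, 0);
    if (n < 0) perror("pread");
    else printf("read %zd bytes in one request\n", n);

    free(buf);
    close(fd);
    return 0;
}
```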
The most difficult case to optimize is the one where you issue many parallel requests to the storage device, using asynchronous file IO to hide latency. There, knowing the device's exact block size matters: you are IOPS-bottlenecked, and a block size that matches what the device supports natively means fewer device operations burned per request.
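As a sketch of that pattern, here's what parallel reads look like with io_uring (via liburing); the queue depth of 32 and the 4096-byte block size are assumed values you would tune for your device. All 32 reads are submitted with a single syscall and reaped in whatever order the device completes them:

```c
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

#define DEPTH 32          /* assumed queue depth */
#define BLOCK 4096        /* assumed device block size */

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct io_uring ring;
    if (io_uring_queue_init(DEPTH, &ring, 0) < 0) return 1;

    /* Queue DEPTH block-sized reads at different offsets so the device
     * can service them in parallel (latency hiding). */
    static char bufs[DEPTH][BLOCK];
    for (int i = 0; i < DEPTH; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, bufs[i], BLOCK, (__u64)i * BLOCK);
        io_uring_sqe_set_data(sqe, (void *)(long)i);
    }
    io_uring_submit(&ring);  /* one syscall submits all DEPTH reads */

    /* Reap completions in whatever order the device finishes them. */
    for (int i = 0; i < DEPTH; i++) {
        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) < 0) break;
        printf("read %d done: %d bytes\n", (int)cqe->user_data, cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```

At a fixed queue depth, throughput is roughly block size times IOPS, so every device operation wasted on a block the hardware has to split is throughput you don't get back.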