GCs typically fall into two categories:
1. Reference counting - tracks how many references point to each object. The count is updated whenever a reference is added or removed, and when it hits zero the object is freed immediately. This places overhead on every operation that modifies references (see the sketch after this list).
2. Mark and sweep - objects are allocated in heap regions managed by the GC. Periodically the GC traces from roots (stack, globals) to find all live objects, then frees the rest. Usually generational: new objects in a nursery/gen0 are collected frequently, survivors are promoted to older generations collected less often.
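To make the first point concrete, here is a minimal, hypothetical sketch of reference-counting bookkeeping in Java. The `RefCounted`, `retain` and `release` names are invented for illustration; real runtimes keep the count in the object header or have the compiler insert the retain/release calls, as Swift's ARC does.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy reference-counted resource. Every place that stores a reference must
// call retain(), and every place that drops one must call release() - this
// per-update bookkeeping is exactly the overhead reference counting adds.
class RefCounted {
    private final AtomicInteger count = new AtomicInteger(1); // creator holds one reference
    private final String name;

    RefCounted(String name) { this.name = name; }

    void retain() {
        count.incrementAndGet(); // cost paid on every new reference
    }

    void release() {
        // Cost paid on every dropped reference; the object is freed the moment
        // the count hits zero, which is what gives reference counting its
        // predictable, incremental latency profile.
        if (count.decrementAndGet() == 0) {
            System.out.println(name + " freed");
        }
    }

    public static void main(String[] args) {
        RefCounted obj = new RefCounted("obj");
        obj.retain();   // a second owner appears
        obj.release();  // the first owner goes away
        obj.release();  // the last owner goes away -> freed immediately
    }
}
```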
In general, reference counting is favoured for predictable latency because you're cleaning up incrementally as you go. Total memory footprint is similar to manual memory management, with a small overhead for the reference counts themselves. The cost is lower throughput, as every reference change requires bookkeeping (see Swift's ARC for a good example).
Mark and sweep GCs are favoured for throughput: allocation is very cheap (in a generational collector you typically just bump a pointer in the nursery) and reference updates carry little overhead beyond any write barriers the collector needs. When a collection does occur it can cause a pause, though modern concurrent collectors have greatly reduced this (see Java's G1GC or .NET for good examples). Memory footprint is usually quite a bit larger than with manual management.
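As a rough illustration of why allocation is so cheap, here is a hypothetical bump-pointer nursery sketched in Java. The class and method names are invented; a real collector hands out per-thread allocation buffers and triggers a minor collection when the nursery fills up.

```java
// Toy bump-pointer "nursery": allocation is just an index increment plus a
// bounds check, which is why allocation-heavy workloads favour tracing GCs.
class BumpNursery {
    private final byte[] region;
    private int top = 0; // next free offset

    BumpNursery(int sizeBytes) { this.region = new byte[sizeBytes]; }

    // Returns the offset of the new "object", or -1 when the nursery is full
    // (a real collector would trigger a minor collection at that point).
    int allocate(int sizeBytes) {
        if (top + sizeBytes > region.length) return -1;
        int objectStart = top;
        top += sizeBytes; // the entire cost of allocation: bump the pointer
        return objectStart;
    }

    public static void main(String[] args) {
        BumpNursery nursery = new BumpNursery(1024);
        System.out.println(nursery.allocate(64)); // 0
        System.out.println(nursery.allocate(64)); // 64
    }
}
```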
In the case of Clojure, which in addition to being a Lisp uses immutable data structures by default, there is both heavy object churn and frequent change to the object graph. This makes throughput a much larger concern than in a less allocation-heavy language, favouring mark and sweep designs.
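To see where the churn comes from, here is a hypothetical Java sketch of the update pattern that immutable data implies: every "modification" allocates a fresh object and the previous version usually becomes garbage almost immediately, which is exactly the short-lived-object profile a generational nursery handles well. (Clojure's persistent data structures share most of their internals rather than copying wholesale as this naive sketch does, but they still allocate on every update.)

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Immutable "update" pattern: each change returns a new object instead of
// mutating in place, so a loop of updates produces a stream of short-lived
// allocations - cheap for a generational tracing GC to reclaim.
final class Counters {
    private final List<Integer> values;

    Counters(List<Integer> values) {
        this.values = Collections.unmodifiableList(new ArrayList<>(values));
    }

    // "Update" by copying; the previous Counters instance becomes garbage
    // as soon as the caller drops its reference to it.
    Counters add(int value) {
        List<Integer> next = new ArrayList<>(values);
        next.add(value);
        return new Counters(next);
    }

    int size() { return values.size(); }

    public static void main(String[] args) {
        Counters c = new Counters(List.of());
        for (int i = 0; i < 1_000; i++) {
            c = c.add(i); // each iteration allocates a new Counters (and list)
        }
        System.out.println(c.size()); // 1000
    }
}
```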