allocation/deallocation from/to the free store (heap)
shall not occur after initialization.
This works fine when the problem is roughly constant, as it was in, say, 2005. But what do things look like in modern AI-guided drones?

I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.
This way you can use pools or buffers whose size you know exactly. But unless your program uses exactly the same amount of memory at all times, you now have to manage memory allocation within your pools and buffers.
The overwhelming majority of embedded systems are designed around a maximum buffer size and a known worst-case execution time. Attempting to balance resources dynamically in a fine-grained way is almost always a mistake in these systems.
Putting the words "modern" and "drone" in your sentence doesn't change this.
Where dynamic allocation starts to be really helpful is when you want to minimize peak RAM usage for coexistence purposes (e.g. other processes are running), or to undersize your physical RAM requirements by exploiting temporal differences between parts of the code (i.e. components A and B never use memory simultaneously, so either can reuse the same RAM). It also simplifies some algorithms. And if you ever deal with variable-length inputs, it spares you from having to reason about maximums at design time, provided you correctly handle allocation failure.
These systems have limits, but they are extremely high, and in the improbable scenario that you hit them, it becomes a prioritization problem. That design problem has mature solutions dating back several decades, to when the limits were a few dozen simultaneous tracks.