the thing that makes me think the halving schedule will stick indefinitely is its simplicity. the consensus function that sets the reward for a block is so simple that even a non-programmer has a decent chance of understanding it, and that also means that when new clients are built for the protocol, consensus errors are unlikely.
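for reference, the whole rule fits in a few lines. this is a close paraphrase of Bitcoin Core's GetBlockSubsidy from validation.cpp, with the types flattened to plain integers so it stands alone:

```cpp
#include <cstdint>

static const int64_t COIN = 100000000;               // satoshis per bitcoin
static const int SUBSIDY_HALVING_INTERVAL = 210000;  // blocks per halving

// block subsidy: 50 BTC, cut in half every 210,000 blocks
int64_t GetBlockSubsidy(int nHeight) {
    int halvings = nHeight / SUBSIDY_HALVING_INTERVAL;
    // force the reward to zero once the right shift would be undefined
    if (halvings >= 64)
        return 0;
    int64_t nSubsidy = 50 * COIN;
    nSubsidy >>= halvings;  // one bit shift: the entire emission curve
    return nSubsidy;
}
```

one integer division, one comparison, one shift. that's the whole monetary policy.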
integer math that approximates transcendental functions is easy to get wrong, compared to a simple bit shift. on a hardware level, dividing by a power of two (recognizable because exactly one bit of the divisor is set) is the same operation as shifting right, so compilers rewrite x/2 into x>>1 and the operation never even reaches the slow divider: it executes in a single cycle on the ALU's barrel shifter.
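a quick sanity check of that equivalence (unsigned values only; signed negatives round differently, which is one of the details compiler writers have to care about):

```cpp
#include <cassert>
#include <cstdint>

int main() {
    uint64_t reward = 50ULL * 100000000ULL;  // 50 coins in base units
    // for unsigned integers, dividing by a power of two IS a right shift;
    // compilers strength-reduce one into the other, so the divider never runs
    assert(reward / 2 == reward >> 1);
    assert(reward / 1024 == reward >> 10);
    // a power of two is recognizable because exactly one bit is set
    uint64_t d = 1024;
    assert((d & (d - 1)) == 0);
    return 0;
}
```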
division is the most time-expensive basic arithmetic operation a processor does, because there are few shortcuts around what is inherently an iterative process. i created a proof of work that was optimized for CPUs by using division on very large numbers (kilobyte-sized numbers). the idea was that there would be no point building an ASIC for such a proof, because the IC that already has this operation most heavily optimized is a CPU's arithmetic divider unit; the divider is one of the largest single blocks in a CPU's execution units, that's how expensive the operation is.
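i won't reproduce that design here, but to give a flavor of the primitive it leaned on, here's a toy sketch (the names mod_limbs and toy_work are made up for illustration, this is not the actual scheme): long division of a kilobyte-sized number, which costs one hardware divide per 64-bit limb.

```cpp
#include <cstdint>
#include <vector>

// remainder of a little-endian multi-limb number modulo a 64-bit divisor,
// via schoolbook long division: one hardware division per limb
uint64_t mod_limbs(const std::vector<uint64_t>& n, uint64_t d) {
    unsigned __int128 rem = 0;
    for (size_t i = n.size(); i-- > 0; )            // most significant limb first
        rem = ((rem << 64) | n[i]) % d;
    return (uint64_t)rem;
}

// toy work function: fold a 1 KiB number (128 limbs) through many divisors.
// purely illustrative -- each call burns 128 divisions per divisor
uint64_t toy_work(const std::vector<uint64_t>& n) {
    uint64_t acc = 0;
    for (uint64_t d = 3; d < 2003; d += 2)
        acc ^= mod_limbs(n, d);
    return acc;
}
```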
hash functions use little or no division; mostly they mix their input with lots of addition, XOR, rotation, and multiplication, where the carries and high bits overflow past the word size and are simply discarded. combine that mixing with compressing an arbitrary-length input down to a fixed-size output and you get the one-way property that makes them easy to compute but practically impossible to reverse. asymmetric cryptography (signatures) relies on a related kind of one-wayness, and supposedly this is the big deal about quantum computers: Shor's algorithm solves the discrete log problem in far fewer steps, recovering the secret key from the public key that was generated from it.
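for a concrete feel of that mixing style, here's the 64-bit finalizer from MurmurHash3 (a non-cryptographic hash, but the same flavor of arithmetic):

```cpp
#include <cstdint>

// MurmurHash3's 64-bit finalizer: each multiply wraps mod 2^64,
// discarding the high half of the 128-bit product
uint64_t fmix64(uint64_t k) {
    k ^= k >> 33;
    k *= 0xff51afd7ed558ccdULL;
    k ^= k >> 33;
    k *= 0xc4ceb9fe1a85ec53ULL;
    k ^= k >> 33;
    return k;
}
```

worth noting: this step on its own is actually invertible; the irreversibility of a real hash comes from piling many rounds like this on top of compressing more input bits than there are output bits.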
which brings me back to the idea of a smooth exponential-decay curve for the block reward, to replace the step function of the halvings. computing that in consensus code is a complex, expensive operation, a pile of fixed-point multiplications and divisions, and complex algorithms are better at hiding bugs that may not manifest except in rare cases: specific heights or factors that trigger a problem and then lead to chain splits. even with fixed-point numbers the operation could be badly coded and mostly work, until it doesn't.
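to show what i mean by complexity, here's a minimal sketch of what a smooth reward(h) = R0 * 2^(-h/H) might look like in Q32 fixed point. everything here is hypothetical (R0, H, qmul, smooth_subsidy are made-up names, not any coin's real code), and it even contains the exact kind of hazard i'm describing: the fractional-exponent table is built with doubles, whose rounding can differ across platforms.

```cpp
#include <cstdint>
#include <cmath>

static const uint64_t R0 = 50ULL * 100000000ULL;  // initial reward, base units
static const uint64_t H  = 210000;                // decay half-life in blocks

// multiply two Q32 fixed-point numbers
static uint64_t qmul(uint64_t a, uint64_t b) {
    return (uint64_t)(((unsigned __int128)a * b) >> 32);
}

uint64_t smooth_subsidy(uint64_t height) {
    uint64_t ipart = height / H;  // whole halvings: still a cheap shift
    uint64_t frac  = height % H;  // fractional part of the exponent
    if (ipart >= 64) return 0;

    // 2^(-frac/H) via the binary bits of f = frac/H in Q32:
    // result = product of 2^(-2^-i) over the set bits of f
    uint64_t f   = (frac << 32) / H;
    uint64_t acc = 1ULL << 32;    // Q32 representation of 1.0
    for (int i = 1; i <= 32; i++) {
        if (f & (1ULL << (32 - i))) {
            // table entry 2^(-2^-i), built with doubles here -- exactly the
            // platform-dependent step that could fork a chain in consensus code
            uint64_t t = (uint64_t)std::llround(
                std::exp2(-std::ldexp(1.0, -i)) * 4294967296.0);
            acc = qmul(acc, t);
        }
    }
    return qmul(R0 >> ipart, acc);
}
```

compare that against the one shift in the halving rule above: same curve in the limit, maybe thirty times the surface area for consensus bugs.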
so, yeah, i agree. the halving schedule is probably better, because it achieves a form of exponential decay (in powers of 2) with an algorithm that is practically impossible to get wrong.