Not more than ONCE :D yes, it makes sense. I sorta had the inkling that uniqueness was part of the concept, but I hadn't connected it to the word "once".
On a side note, since malice and error are indistinguishable without evidence of intent, you could say that pedophiles represent high-entropy humans who replicate their entropy onto other, immature humans, with that replication being the most fundamental element of their intent (normalization).
Random values in messages are definitely needed for optimal security, unless there is inherent entropy in the message. 32 bits of timestamp don't constitute strong entropy. Maybe if it were nanoseconds and 64 bits we'd be starting to talk about something with low collision potential.
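Here's a rough sketch of why the width matters, using the standard birthday-bound approximation. The message count (one million) is just a made-up figure for illustration:

```python
import math

def collision_probability(n_messages: int, value_bits: int) -> float:
    """Approximate birthday-bound probability that at least two of
    n_messages uniformly random values of value_bits bits collide."""
    space = 2 ** value_bits
    # p ~= 1 - exp(-n(n-1) / (2 * space)), valid while n is much smaller than space
    return 1 - math.exp(-n_messages * (n_messages - 1) / (2 * space))

# A 32-bit value collides almost surely after a million messages;
# a 96-bit random nonce is nowhere close. (A real timestamp is worse
# than this estimate, since it isn't even uniformly random.)
for bits in (32, 64, 96):
    print(f"{bits} bits, 1M messages: {collision_probability(1_000_000, bits):.2e}")
```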
Keep in mind that this encryption is not just for the relatively low volume of human-readable text. It will inevitably be needed to encrypt massive amounts of data, and the bigger the amount of data, the more likely a collision becomes. For this reason GCM, for example, should not use the same secret on more than 4 GB of data. This is directly related to the permutations created by the entropy of the data you are encrypting and the size of the nonce, which for GCM is 12 bytes, or 96 bits.
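For concreteness, here's what the 96-bit nonce looks like in practice. This is a minimal sketch assuming the pyca/cryptography package; the key generation and the 12-byte random nonce are the relevant parts, the rest is just plumbing:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the shared secret
aesgcm = AESGCM(key)

def encrypt(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    # GCM's standard nonce is 12 bytes (96 bits). It must never repeat
    # under the same key, which is exactly why per-key volume limits exist.
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)
```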
SSL uses 128- and 256-bit sizes for the secrets because reversing them becomes unlikely to occur within the time window over which the security of the protocol must survive. The cheaper that computation gets, and the more effective systems of parallelisation become, the more likely we are to see a need to scale up to 384 and 512 bits. For now, it is very safe to bet on 256.
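A back-of-the-envelope calculation makes the point. The guess rate below (a billion machines each testing a trillion keys per second) is a deliberately absurd assumption, and even then exhaustive search doesn't come close:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case time to try every key at an assumed guess rate."""
    return (2 ** key_bits) / guesses_per_second / SECONDS_PER_YEAR

rate = 1e9 * 1e12  # hypothetical: 10^9 machines * 10^12 guesses/sec each
for bits in (128, 256, 384, 512):
    print(f"{bits} bits: {years_to_exhaust(bits, rate):.3e} years")
```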
There have been several recent instances of people messing around with dice-roll entropy to generate Bitcoin addresses and having them almost immediately hacked. Computing a private key by iterating through the counting-number space will pick up a short seed within very few iterations.
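To illustrate the attack shape (this is a toy, not real Bitcoin key derivation, which uses 256-bit secp256k1 scalars): if the key is derived from a small seed, an attacker just counts upward until the derived values match.

```python
import hashlib

def toy_key_from_seed(seed: int) -> bytes:
    # Stand-in for real private-key derivation: a hash of the seed integer.
    return hashlib.sha256(str(seed).encode()).digest()

# Someone "generates" a key from a handful of dice rolls, e.g. 2-4-6-1-3.
victim_key = toy_key_from_seed(24613)

# The attacker iterates the counting numbers; a short seed falls out fast.
for candidate in range(1, 10_000_000):
    if toy_key_from_seed(candidate) == victim_key:
        print("recovered seed after", candidate, "iterations")
        break
```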
The purpose of Password-Based Key Derivation Functions (is HKDF a term used in papers now?) is to increase the amount of time required to test each candidate. It is like raising the difficulty on proof of work; it functions on the same basis as everything I've discussed previously, except for the number of repetitions and the expansion of the output values (PBKDFs like Argon/Argon2 include a memory-usage factor that adds memory hardness to the derivation, similar to how the old Ethereum PoW used a huge key, which was essentially a type of salt, to lower the optimization potential).
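You can see the "raise the difficulty" effect directly with the standard-library PBKDF2 (Argon2 needs an external package, so this sketch sticks to hashlib; the password and iteration counts are arbitrary examples):

```python
import hashlib, os, time

password = b"correct horse battery staple"
salt = os.urandom(16)

# Every candidate password an attacker tries must pay for all the
# iterations, so raising the count raises the per-guess cost.
for iterations in (1_000, 100_000, 1_000_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
    print(f"{iterations:>9} iterations: {time.perf_counter() - start:.3f}s")
```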