One can recreate these results with a simpler transformer architecture, without multiple levels. The trick lies in the training setup and the iterative Q-learning loss, not in the hierarchy or the recursion through latent space.
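To make the claim concrete, here is a minimal numpy sketch of the kind of setup being described: a single weight-shared block applied iteratively (recursion via weight sharing rather than a multi-level hierarchy), with per-step supervision and a Q-learning-style halting loss. Everything here is illustrative; the names (`W`, `q_head`, `N_STEPS`) and the exact reward shaping are assumptions, not the original paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; illustrative only.
D, N_STEPS = 8, 4
W = rng.normal(scale=0.1, size=(D, D))   # one shared block, reused every step
q_head = rng.normal(scale=0.1, size=D)   # scalar Q-value head: "halt now?"

def step(z, x):
    """One refinement step: the same weights applied again.
    The recursion is in the weight sharing, not in a hierarchy of modules."""
    return np.tanh(z @ W + x)

def losses(x, y):
    z = np.zeros(D)
    sup, q_vals = [], []
    for _ in range(N_STEPS):
        z = step(z, x)
        sup.append(np.mean((z - y) ** 2))  # deep supervision at every step
        q_vals.append(float(z @ q_head))   # predicted value of halting here
    # Iterative Q-learning-style target: each step's Q should match the best
    # achievable outcome from here on (reward = negative supervised error).
    q_loss = 0.0
    for t in range(N_STEPS):
        target = -min(sup[t:])
        q_loss += (q_vals[t] - target) ** 2
    return float(np.mean(sup)), float(q_loss / N_STEPS)

x, y = rng.normal(size=D), rng.normal(size=D)
sup_loss, q_loss = losses(x, y)
```

Both loss terms would be summed and minimized jointly in an actual training loop; the point of the sketch is only that this loop needs no hierarchical architecture.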
