I wonder if the infinity of symbolic state spaces, like the ones you can write down in PDDL, is bigger than the infinity of learned feature representations, like what you get out of an embedding model.
Of course I’m talking about PDDL without floats, since with numeric fluents the symbolic side is trivially equal to or bigger.
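To pin the comparison down, here’s a back-of-the-envelope cardinality sketch. This is my own framing, and the symbols $S_{\text{PDDL}}$, $d$, and $F_{32}$ are just notation I’m introducing: assume ground states written as finite strings over a finite alphabet on the symbolic side, and $d$-dimensional vectors on the embedding side.

$$
\begin{aligned}
|S_{\text{PDDL}}| &\le \aleph_0 && \text{finite strings over a finite alphabet: at most countable} \\
|\mathbb{R}^d| &= 2^{\aleph_0} && \text{idealized real-valued embeddings: uncountable} \\
|F_{32}^{\,d}| &\le (2^{32})^d && \text{float32 embeddings as actually stored: finite}
\end{aligned}
$$

So the answer seems to hinge on which embedding space you mean: against idealized reals the statistical side has the strictly bigger infinity ($\aleph_0 < 2^{\aleph_0}$), but against fixed-precision floats an unbounded symbolic vocabulary wins, since any finite set is smaller than $\aleph_0$.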
Whether this is true or not bears on the fundamental generalizability of symbolic versus statistical models.