I've experimented with smaller models, such as the 7-billion- and 13-billion-parameter ones. When comparing Falcon (7 billion parameters) to Llama (13 billion parameters), Falcon clearly outperforms it.
However, some caution is warranted, since the field is still in its early stages and progress is ongoing.