"MPT-7B looks to be super competitive across the board, even beats 13B models. This LLM is trained on 1T tokens of text and code curated by MosaicML. The model is fine-tuned to also work with a context length of 65k tokens!"
https://twitter.com/hardmaru/status/1654790008925220866?t=_eXP4ZcjdMd_hpPLMdAVZA&s=19
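Not from the linked thread, but as a minimal sketch of trying the model: MPT-7B is released on the Hugging Face Hub as `mosaicml/mpt-7b` (with `mosaicml/mpt-7b-storywriter` as the 65k-context variant), uses the GPT-NeoX tokenizer, and needs `trust_remote_code=True` because it ships a custom model class.

```python
# Sketch: load MPT-7B with Hugging Face transformers and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-7b"  # or "mosaicml/mpt-7b-storywriter" for the 65k-context variant

# MPT-7B was trained with the EleutherAI GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# trust_remote_code=True is required because MPT uses a custom architecture
# defined in the model repository rather than a built-in transformers class.
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "MosaicML trained MPT-7B on"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```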