Replying to Raul007

"MPT-7B looks to be super competitive across the board, even beats 13B models. This LLM is trained on 1T tokens of text and code curated by MosaicML. The model is fine-tuned to also work with a context length of 65k tokens!"

https://twitter.com/hardmaru/status/1654790008925220866?t=_eXP4ZcjdMd_hpPLMdAVZA&s=19

nobody 2y ago

or you can just converse with one on the front end -
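For anyone who'd rather run it locally than through a hosted front end, here's a minimal sketch of loading MPT-7B with Hugging Face transformers. The model and tokenizer IDs follow MosaicML's release; the dtype and generation settings are just illustrative assumptions, not from the original post.

```python
# Minimal sketch: load MPT-7B and generate a short completion.
# Model/tokenizer IDs follow MosaicML's release; other settings are illustrative.
import torch
import transformers

model_id = "mosaicml/mpt-7b"  # chat/instruct/storywriter variants also exist

# MPT-7B uses the GPT-NeoX tokenizer per the model card
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: bf16 to fit on a single GPU
    trust_remote_code=True,       # MPT ships custom modeling code on the Hub
)

prompt = "MosaicML's MPT-7B is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```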
