Raul007
Following all things #AI

"... a complete system for real-time rendering of scenes with complex appearance previously reserved for offline use."

"The AI material model accurately learns the details of the ceramic, the imperfect clear-coat glaze, fingerprints, smudges, and dust. Our neural model is capable of preserving these complex material properties while being faster to evaluate than traditional multilayered materials."

https://research.nvidia.com/labs/rtr/neural_appearance_models/assets/nvidia_neural_materials_video-2023-05.mp4
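Haven't dug into the details, but the gist as I read it: swap an expensive multilayered analytic BRDF for a small neural network evaluated per shading point. Rough PyTorch sketch of that general idea below. To be clear, this is not NVIDIA's actual architecture, and the `reference_brdf` stand-in is made up just so it runs end to end: a tiny MLP learns to map UV plus view/light directions to RGB reflectance.

```python
import torch
import torch.nn as nn

class NeuralMaterial(nn.Module):
    """Toy neural material: (uv, view dir, light dir) -> RGB reflectance.

    A minimal sketch of the general idea, not NVIDIA's model: a small
    MLP stands in for an expensive multilayered analytic BRDF.
    """
    def __init__(self, hidden=64):
        super().__init__()
        # Inputs: 2 (uv) + 3 (view direction) + 3 (light direction) = 8
        self.mlp = nn.Sequential(
            nn.Linear(8, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB reflectance
        )

    def forward(self, uv, wo, wi):
        x = torch.cat([uv, wo, wi], dim=-1)
        # Softplus keeps predicted reflectance non-negative.
        return nn.functional.softplus(self.mlp(x))

def reference_brdf(uv, wo, wi):
    # Hypothetical stand-in for samples of the real measured material,
    # here just a spatially varying Lambertian term (normal along +z).
    albedo = 0.5 + 0.5 * torch.sin(uv @ torch.tensor([[3.0, 11.0, 7.0],
                                                      [9.0, 2.0, 5.0]]))
    return albedo * wi[..., 2:3].clamp(min=0.0)

# Fit the MLP to reference samples, then evaluate it at render time
# instead of the original layered material.
model = NeuralMaterial()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    uv = torch.rand(4096, 2)
    wo = nn.functional.normalize(torch.randn(4096, 3), dim=-1)
    wi = nn.functional.normalize(torch.randn(4096, 3), dim=-1)
    loss = nn.functional.mse_loss(model(uv, wo, wi),
                                  reference_brdf(uv, wo, wi))
    opt.zero_grad()
    loss.backward()
    opt.step()
```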

Haven't been able to try the RedPajama models yet (on vacation right now). Hearing they're a bit rough. Still another step in the right direction, though.

https://www.together.xyz/blog/redpajama-models-v1

"MPT-7B looks to be super competitive across the board, even beats 13B models. This LLM is trained on 1T tokens of text and code curated by MosaicML. The model is fine-tuned to also work with a context length of 65k tokens!"

https://twitter.com/hardmaru/status/1654790008925220866?t=_eXP4ZcjdMd_hpPLMdAVZA&s=19

MPT-7B release, under a truly open-source license (Apache 2.0).

https://huggingface.co/mosaicml/mpt-7b
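If you want to try it, here's the usual Hugging Face loading pattern, per the model card: the checkpoint ships custom model code (hence `trust_remote_code=True`) and reuses the EleutherAI/gpt-neox-20b tokenizer. The 65k-context fine-tune is a separate checkpoint (mosaicml/mpt-7b-storywriter).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# MPT-7B ships its own model code, so transformers must be told to run it.
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    trust_remote_code=True,
)
# The model card pairs MPT-7B with the GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("Open-source LLMs are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```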

#AI #LLM #OpenSource

A leaked internal Google document argues that open-source LLMs (large language models) will surpass the closed models from big tech and OpenAI.

Just as Stability AI's Stable Diffusion spawned countless community-trained and fine-tuned image models, open LLMs from Stability AI (StableLM), Databricks (Dolly), and others can do the same for text.

https://www.semianalysis.com/p/google-we-have-no-moat-and-neither