The model is the magic behind Muse. It is a large neural network with billions of parameters, calibrated by reading and learning from hundreds of billions of words sourced from web pages, books, scientific articles, and more. During this training, the model learns to model language accurately (i.e. to predict the next word of a text). Think about how, when sending text messages, you get suggestions trying to predict the next word you want to type. The variety, quality, and sheer amount of our curated training data result in universal language models, able to handle diverse tasks simply by reading and processing written instructions.
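The next-word objective described above can be illustrated with a toy sketch. This is a minimal bigram counter over a tiny made-up corpus, purely to show what "predict the next word" means; it bears no resemblance to the actual Muse model.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
# Illustration of the training objective only, not the Muse model itself.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

A large language model does the same thing in spirit, but instead of counting word pairs it uses billions of learned parameters to score every possible next word given all the preceding text.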
We offer 🤖 Models of different capabilities in various languages. Increasingly powerful models are slower and more expensive, but they can tackle more tasks more effectively.
Model sizes are named after constellations (from smallest to largest, beginning with lyra), followed by their language code (en for English, fr for French, and so on). For example, for applications in English you can use lyra-en; for French, lyra-fr; and for German, lyra-de.
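The naming scheme can be captured by a trivial helper. Note that model_name is a hypothetical function written here for illustration; it is not part of any Muse client library, which simply expects the full model string.

```python
# Hypothetical helper illustrating the naming scheme: <size>-<language code>.
# Not part of any Muse client library; shown only to make the convention explicit.
def model_name(size: str, lang: str) -> str:
    return f"{size}-{lang}"

print(model_name("lyra", "en"))  # lyra-en
print(model_name("lyra", "fr"))  # lyra-fr
```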