Microsoft Orca 2

Teaching Small Language Models How to Reason.


Overview

Orca 2 is a family of small language models (7B and 13B parameters) from Microsoft Research designed for enhanced reasoning abilities of the kind typically found only in much larger models. Orca 2 models are created by fine-tuning Llama 2 base models on high-quality synthetic data that teaches them various reasoning techniques. They are intended for research purposes, to explore the capabilities of smaller language models.

✨ Key Features

  • Enhanced reasoning abilities in small models
  • Taught to use different solution strategies for different tasks
  • Fine-tuned from Llama 2 base models
  • Available in 7B and 13B parameter sizes
  • Openly available for research purposes

🎯 Key Differentiators

  • Focus on teaching reasoning strategies to small models
  • Use of high-quality synthetic data for training

Unique Value: Demonstrates that smaller language models can achieve strong reasoning abilities through innovative training methods.

🎯 Use Cases (4)

  • AI research on reasoning in language models
  • Tasks requiring complex, multi-step reasoning
  • Math problem solving
  • Text summarization

💡 Check With Vendor

Verify these considerations match your specific requirements:

  • Not intended for downstream applications without further safety analysis and evaluation

🏆 Alternatives

  • Phi-3
  • Other small, reasoning-focused models

Offers a unique approach to improving the reasoning of small models, not just their knowledge.

💻 Platforms

Self-hosted

✅ Offline Mode Available

🔌 Integrations

Hugging Face Transformers
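Since the models are distributed through Hugging Face, getting a prompt into the format Orca 2 expects is the first integration step. The sketch below builds a single-turn prompt using the ChatML-style template described on the Orca 2 model card; the exact template and the model id (`microsoft/Orca-2-7b`) should be verified against the card before use.

```python
def build_orca2_prompt(system_message: str, user_message: str) -> str:
    """Assemble a single-turn prompt in the ChatML-style format
    the Orca 2 model card describes (assumption -- verify on the card)."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

prompt = build_orca2_prompt(
    "You are Orca, an AI language model created by Microsoft. "
    "Think step by step before answering.",
    "What is 17 * 6?",
)
print(prompt)

# With transformers installed and enough memory, generation would
# look roughly like this (sketch, not run here):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("microsoft/Orca-2-7b")
#   model = AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b")
#   ids = tok(prompt, return_tensors="pt").input_ids
#   out = model.generate(ids, max_new_tokens=64)
#   print(tok.decode(out[0]))
```

Keeping the prompt builder separate from model loading makes it easy to unit-test the template without downloading multi-gigabyte weights.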

💰 Pricing

Contact for pricing
Free Tier Available

Free tier: the model weights are openly available for research purposes.

Visit Microsoft Orca 2 Website →