SHA expands the capabilities of its INTRA platform by introducing supervised fine-tuning of Large Language Models (LLMs). This new feature complements the existing unsupervised fine-tuning capability, giving organizations a dual approach to optimizing AI performance.
This dual approach empowers INTRA to deliver:
- Accurate answers aligned with proprietary knowledge bases.
- Adaptability to sector-specific domains.
- Progressive refinement through user-driven supervised training.
With unsupervised fine-tuning, INTRA adapts open LLMs to a client's proprietary corpus, enriched with domain-specific knowledge reflecting the nature of the business. This ensures that models understand each organization's terminology, documents, and internal processes.
Now, with supervised fine-tuning, INTRA goes a step further: the system learns directly from the questions users pose and the answers the LLMs generate. This iterative training loop refines the model, increasing accuracy and relevance in real-world interactions.
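As a rough illustration, the supervised loop described above amounts to a data-curation step: user questions and the LLM's answers are collected, filtered by user feedback, and emitted as prompt/completion pairs for a later fine-tuning job. The field names, the rating threshold, and the JSONL output format below are illustrative assumptions, not INTRA's actual schema.

```python
import json

def build_sft_dataset(interactions, min_rating=4):
    """Turn logged user Q&A interactions into supervised fine-tuning examples.

    The `interactions` structure and `rating` field are hypothetical,
    chosen for illustration only.
    """
    examples = []
    for item in interactions:
        # Keep only answers users rated highly, so training reinforces
        # responses judged accurate and relevant.
        if item.get("rating", 0) >= min_rating:
            examples.append({
                "prompt": item["question"],
                "completion": item["answer"],
            })
    return examples

def to_jsonl(examples):
    # One JSON object per line -- a common input format for fine-tuning jobs.
    return "\n".join(json.dumps(e, ensure_ascii=False) for e in examples)

interactions = [
    {"question": "What is our refund policy?",
     "answer": "Refunds are accepted within 30 days.", "rating": 5},
    {"question": "Who signs off on purchases?",
     "answer": "I am not sure.", "rating": 2},
]
dataset = build_sft_dataset(interactions)
print(to_jsonl(dataset))
```

Rerunning this curation on fresh interactions each cycle is what makes the refinement progressive: every round of real usage yields new high-quality training pairs.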
“By combining supervised and unsupervised fine-tuning, INTRA enables organizations to build truly adaptive AI: trained on their proprietary corpus and continuously refined through real user interactions,” said José Luis Caaveiro, CEO of SHA.
For more information, please contact: mc@sha-saas.com

