SHA announces demo version available online

We are pleased to announce that INTRA’s Demo Version is now available online.

INTRA enables organizations to expand and centralize knowledge, ensuring it is effectively applied across operations, training, and quality management.

By transforming knowledge into actionable insights, INTRA helps teams improve consistency, operational efficiency, and customer experience, while maintaining control over processes and outcomes.

Request access and explore INTRA’s key features through our Demo section.

For more information, please contact mc@sha-saas.com

Checkmate for Customer Service? When Knowing the Rules Is No Longer Enough!

Comparing a game of chess with a customer service interaction may seem unexpected at first. Yet, when you look closely at the structure and progression of both, the analogy becomes surprisingly insightful.

While their objectives differ radically—checkmate vs. customer satisfaction and problem resolution—both follow a similar escalation in complexity as the interaction unfolds.


Conceptual Parallels

| Chess game | Call center interaction | Meaning / analogy |
| --- | --- | --- |
| Opening strategy | Call opening / reception | Set the tone and take control from the start. |
| Tactical combination | Handling of objections | Quick thinking to turn the situation around. |
| Endgame accuracy | Closing of the call | Ensure resolution and satisfaction before finishing. |
| Sacrifice | Compensation or a gesture of goodwill | Short-term loss for long-term gain (loyalty or retention). |
| Checkmate | Customer satisfaction and resolution | Achieve the desired result efficiently. |
| Critical error | Communication error / rule violation | A costly mistake that affects the outcome. |
| Stalemate | Deadlock / escalation | Neither side achieves its goal. |
| Time pressure | High call volume periods | Decisions under pressure; a trade-off between efficiency and precision. |

Now that the parallel between the two activities is clear, it becomes interesting to observe how AI behaves in these interactions:

An illustrative example of the limitations of LLMs (large language models) comes from documented experiments with chess.

In March 2024, Chess.com staged a showdown between ChatGPT and Google’s Gemini. When asked directly, both systems could explain the rules of chess perfectly, yet they violated those same rules repeatedly during play: both bots kept attempting illegal moves, and when the errors were pointed out, they simply produced new invalid ones.

Nikola Greb, an NLP data scientist and former Elo 2000+ junior chess champion, played several games against ChatGPT-4 in January 2024 and documented that the model played “like a grandmaster” in the opening moves but deteriorated significantly as the game progressed. ChatGPT-4 began to hallucinate, producing impossible moves even after being warned. Greb concluded that the system’s overall rating was below 1500, and he observed something crucial: “No implicit rule learning has taken place – ChatGPT-4 still hallucinates at chess, and continues to hallucinate after the warning about hallucination. This is something that cannot happen to a human.”

This disconnect between what an LLM can “say” and what it can “do” reveals a fundamental limitation: LLMs have no real mental model of the world. In the context of customer service, this means a bot can recite company policy perfectly yet apply it incorrectly in specific situations, or explain how a product works without being able to diagnose a problem with it.
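These chess experiments also point to a practical guardrail: never trust a model’s claim to know the rules; validate every move it proposes against the actual game state. The sketch below is illustrative only, assuming Python and the open-source python-chess library (none of the experiments above describe their tooling):

```python
# Minimal sketch: validate an LLM-proposed chess move before accepting it.
# Assumes the open-source python-chess library (pip install chess).
import chess

def apply_llm_move(board: chess.Board, san_move: str) -> bool:
    """Apply a move suggested by an LLM only if it is legal in this position."""
    try:
        board.push_san(san_move)  # raises ValueError on illegal/ambiguous SAN
        return True
    except ValueError:
        return False  # reject the hallucinated move; ask the model to retry

board = chess.Board()
print(apply_llm_move(board, "e4"))   # True: a legal opening move for White
print(apply_llm_move(board, "Qh5"))  # False: Black's queen cannot legally reach h5
```

The same pattern applies to customer service bots: a rules engine, not the language model, should have the final word on which actions are allowed.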

The Chatbot Chess Tournament 2025

In January 2025, a chatbot chess tournament aired on the GothamChess channel pitted the professional chess engine Stockfish against seven generative AI chatbots, including ChatGPT, Google’s Gemini, and X’s Grok. The results were exactly what you would expect when language models try to play chess: decent opening moves followed by increasingly chaotic attempts to circumvent the rules of the game. The Snapchat chatbot decided that pawns could move sideways like a rook, and when the error was reported, it refused to continue, repeatedly saying: “I’m sorry. I can’t engage in such a conversation. Let’s keep our conversation respectful.”

The problem of memory and context

LLMs have strict memory limits. While newer models offer wider context windows, they still treat each conversation as relatively isolated. This means they can “forget” crucial information provided at the beginning of a long conversation, forcing customers to repeat themselves.
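A purely illustrative sketch of that mechanism, with an invented turn list and a crude word-based token count standing in for a real tokenizer:

```python
# Illustrative sketch: when a conversation outgrows the context window,
# the oldest turns are dropped first -- and their details are "forgotten".
def trim_history(turns: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent turns that fit in the window."""
    kept, used = [], 0
    for turn in reversed(turns):       # walk from newest to oldest
        cost = len(turn.split())       # crude token estimate (an assumption)
        if used + cost > max_tokens:
            break                      # everything older falls out of context
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [
    "Customer: my order number is 48-2213 and the charger arrived broken",
    "Bot: sorry to hear that, could you describe the damage?",
    "Customer: the casing is cracked, so can you refund it?",
]
# With a small window, the turn containing the order number is dropped,
# and the bot has to ask the customer to repeat it.
print(trim_history(history, max_tokens=20))
```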

In one of our upcoming articles, we will look at how to avoid setting the customer up for failure while making the best use of AI’s undeniable capabilities…

Why are companies adopting AI in customer service? A necessary reflection

Artificial intelligence has burst into customer service departments at breakneck speed. Chatbots, virtual assistants, and automated systems are multiplying on websites and in applications, promising to revolutionize the user experience. But it is worth asking: are companies making this decision based on solid evidence, or simply following a trend?

The seduction of immediate savings

Let’s not fool ourselves: the economic factor is the elephant in the room. Automating customer service can reduce operational costs dramatically. A chatbot doesn’t need vacations, doesn’t ask for salary increases, and can serve thousands of users simultaneously. For CFOs, the equation seems simple.

However, this short-term view ignores hidden costs: developing and implementing robust AI systems, ongoing maintenance, training hybrid human-machine teams, and most importantly, the reputational cost when the technology fails or frustrates customers.

The Corporate FOMO Effect

There is a clear “fear of being left behind” in the business world. When competitors announce their advances in AI, boards of directors push to “do something with artificial intelligence.” AI has become a marketing element, a box to tick in the annual presentation of results.

This reactive, rather than strategic, adoption explains why so many implementations seem botched: confusing interfaces, bots that don’t understand basic queries, or systems that frustrate more than they help. Technology is deployed not because it solves real problems, but because “you have to be there”.

Did anyone ask customers?

Here we come to the most delicate point. How many companies have conducted serious studies of what their customers really prefer before automating? Anecdotal evidence suggests that many users still greatly value the human touch, especially in complex or emotionally charged situations.

No one wants to navigate endless automated menus when they have an urgent problem. No one enjoys repeating their query three times to a bot that doesn’t understand the context. And yet, these experiences multiply every day.


The paradox: studies suggest that customers prefer to deal with their peers (humans) when those are available; when humans are unavailable for whatever reason, customers seem to cope with the AI alternatives. Yet customers keep purchasing products and services without clearly selecting the vendors that offer their preferred support channel: humans. Is this because there is very little offering on the market promoting customer support “made by humans”? Should such a label be developed and promoted?


The efficiency argument… for whom?

Companies talk about “improving efficiency”, but efficiency for whom? A system can be efficient for the business (it processes more queries with fewer resources) and simultaneously inefficient for the customer (it requires more time, generates more frustration).

The real question is: are we measuring success correctly? If the metrics are purely internal (number of queries processed, average response time, cost reduction), we are optimizing for the business, not for the customer.
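A hypothetical illustration of how the two scorecards can diverge on the same support log; the field names and figures are invented for the example:

```python
# Hypothetical example: the same three support cases scored two ways.
from statistics import mean

cases = [
    {"minutes_per_contact": 4, "contacts_to_resolve": 1, "resolved": True},
    {"minutes_per_contact": 3, "contacts_to_resolve": 3, "resolved": True},
    {"minutes_per_contact": 2, "contacts_to_resolve": 4, "resolved": False},
]

# Business-side metric: cost of each individual contact.
avg_handle_time = mean(c["minutes_per_contact"] for c in cases)

# Customer-side metrics: total effort until the problem is actually solved.
avg_customer_effort = mean(
    c["minutes_per_contact"] * c["contacts_to_resolve"] for c in cases
)
resolution_rate = mean(c["resolved"] for c in cases)

print(f"avg handle time:     {avg_handle_time:.1f} min/contact")
print(f"avg customer effort: {avg_customer_effort:.1f} min/case")
print(f"resolution rate:     {resolution_rate:.0%}")
```

The per-contact figure looks healthy, while the per-case figure, the one customers actually experience, tells another story.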