Recent scientific studies have highlighted significant differences in the way men and women appropriate and use new technologies.
Take GPS navigation applications such as Waze or Google Maps, for example: research shows that men, on average, place greater trust both in the recommendations these tools provide and in their own navigation skills, while women tend to verify the information themselves, relying more on spatial landmarks [1][2].
Might we venture an original hypothesis: could these findings carry over to the use of AI, particularly in consulting?
In the absence of a study confirming or refuting this hypothesis, it is reasonable to assume that similar behavioral differences could be observed in consultants' use of AI tools.
Let’s imagine a scenario in which gender bias comes into play in the use of the technology, extrapolating the conclusions of the navigation studies to the preparation of a client meeting. The male consultant confidently relies on the recommendations generated by a predictive model to help define the strategy to adopt. The female consultant, on the other hand, takes the time to challenge the algorithm’s results and to weigh them against her experience and business intuition. The former sees AI as a decision-making tool; the latter, as a support that requires a critical eye [3][4].
Beyond trust, priorities and expectations toward AI may also differ. Female consultants would seek to use these technologies to provide their clients with tailor-made insights, while male consultants would focus more on the productivity and efficiency gains enabled by task automation [5][6].
Of course, these are deliberately provocative hypotheses with no scientific basis.
But one thing is certain: to get the most out of AI, consulting firms must be mindful of these potential biases and implement the necessary safeguards.
This begins with appropriate training to enable all consultants—men and women alike—to develop a thorough understanding of how algorithms function and where their limitations lie [7][8]. The key challenge is to foster a climate of reasoned trust: neither naïve nor overly skeptical of the tools, their outputs, their capabilities, or their constraints.
The composition of mixed and diverse project teams is another essential lever. By bringing together varied perspectives and sensibilities, we can encourage a more balanced and relevant use of AI, leveraging the power of complementary approaches [9][10].
Finally, the adoption of artificial intelligence must be accompanied by rigorous ethical reflection. Robust processes need to be established to audit algorithms, identify discriminatory biases, and ensure that the recommendations generated align with the company’s values [11][12].
By cultivating this “AI-Q” (artificial intelligence quotient) at both the individual and collective level, we can transform AI into a true asset—enhancing employee development, strengthening internal performance, and driving better outcomes for our clients.
Pierre Courrieu
Sources:
[1] https://www.scirp.org/journal/paperinformation?paperid=62732
[2] https://www.linkedin.com/pulse/consulting-40-how-ai-redefining-service-delivery-
[3] https://www.forbes.com/sites/forbesbusinesscouncil/2023/09/26/ai-in-management-consulting-emerging-solutions-and-a-path-forward/?sh=58d5e085744e
[4] https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
[5] https://www.nature.com/articles/s41599-022-01043-5
[6] https://innodata.com/best-approaches-to-mitigate-bias-in-ai-models/
[7] https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
[8] https://link.springer.com/article/10.1007/s00146-023-01747-5
[9] https://www.alpha-sense.com/blog/trends/generative-ai-consulting/
[10] https://www.sciencedirect.com/science/article/pii/S0360131512003053
Published on 30.05.24