Artificial intelligence has become an essential tool in French businesses. From document translation and decision support to the automation of complex tasks, its uses are multiplying, often in haste and without a clear framework. But behind these apparent productivity gains, very real problems are emerging. Internal security services are now raising the alarm about practices that directly expose data, strategy, and sometimes even the very survival of organizations. In a recent report on the risks of economic interference, the DGSI (General Directorate for Internal Security) describes several situations experienced by French companies. They share a central theme: the rapid adoption of AI tools, often consumer-grade, without a real understanding of the medium- and long-term consequences.
One of the first risks identified concerns the handling of sensitive data
In some companies, employees have developed the habit of using online artificial intelligence tools to translate or rephrase professional documents. Contracts, internal memos, technical reports, and financial information have thus been copied and transmitted to external platforms without prior approval from management or legal review. These practices, perceived as harmless, have led to the unintentional exposure of strategic information outside the company. The problem is compounded by the very nature of many AI services, which use the content submitted by users to improve their models. The transmitted data may be stored on servers located abroad and subject to non-European legislation, which is sometimes incompatible with French requirements on confidentiality and the protection of trade secrets. Once the information has been disseminated, no internal corrective measure can truly restore control over it.
Beyond the data, the DGSI also points to a growing risk of decision-making dependence
In some rapidly growing organizations, AI tools have been used to vet business partners: to analyze their creditworthiness, reputation, and associated legal risks. Due to time and resource constraints, these recommendations were sometimes followed without thorough human review, and strategic decisions then rested almost exclusively on automated analyses. This excessive delegation weakens corporate governance. AI systems produce results based on statistical probabilities, which may contain biases, approximations, or outright errors. Their internal workings remain largely opaque, making it difficult to fully account for the conclusions they propose. Without critical scrutiny, management can lose control of decisions that will have a lasting impact on the future of their organization.
Another danger, more spectacular but just as worrying, concerns fraudulent uses of AI
Security services are reporting attempted scams that rely on voice and image synthesis technologies. In a recent case, the manager of an industrial site received a video call that appeared to come from his boss. The face, voice, and behavior all seemed credible; it was the urgency of the request for a funds transfer that raised suspicions and prevented the fraud. Subsequent analysis confirmed that the call was a deepfake generated by artificial intelligence. These scenarios illustrate a now well-established reality: AI is no longer just a performance tool; it has also become a source of vulnerabilities. Without a clear framework, without training for teams, and without controls on its use, it can expose trade secrets, weaken decision-making, and pave the way for sophisticated manipulation. For French companies, the question is no longer whether to use AI, but how to integrate it without jeopardizing their security, sovereignty, and credibility.