France’s AI strategy
The national strategy is largely based on recommendations from the report "For a meaningful artificial intelligence: towards a French and European strategy" (March 2018), released the day before the strategy was announced. The report is the result of a six-month mission led by Cédric Villani, a French mathematician, Fields Medal winner and Member of Parliament, and his team.
The plan was introduced at the end of the day-long conference, “AI for Humanity”, at the Collège de France in Paris. The programme included keynote talks by leading AI researchers, entrepreneurs and representatives of the research sector (Justine Cassell, Laurence Devillers, Stéphane Mallat, Stuart Russell, Yann LeCun, Noriko Arai, Cathy O’Neil, Ran Balicer, Latanya Sweeney, Fei-Fei Li, Grégory Renard, Marie-Paule Cani, Frédérique Vidal, Antoine Petit, Sam Altman, among others).
At the same time, France Stratégie, the strategy department attached to the French Prime Minister, released a report on AI and the future of work (in French only). This is in keeping with the France intelligence artificielle report and the launch of the #FranceIA initiative, both in March 2017.
Key proposals of the Villani Report on AI
1. Encourage companies to pool and share their data
The government must encourage the creation of data commons and support an alternative data production and governance model based on reciprocity, cooperation and sharing. The goal is to boost data sharing between actors in the same sector.
The government must also encourage data sharing between private actors, and assist businesses in this respect. It must arrange for certain data held by private entities to be released on a case-by-case basis, and support data and text mining practices without delay.
2. Create data that is in the public interest
Most of the actors heard by the mission were in favour of progressively opening up access to some data sets on a case-by-case and sector-specific basis for public interest reasons. This could be in one of two ways: by making the data accessible only to the government, or by making the data more widely available, for example to other economic actors.
3. Support the right to data portability
The right to data portability is one of the most important innovations in recent French and European texts. It will give any individual the ability to migrate from one service ecosystem to another without losing their data history.
This right could be extended to all citizen-centred artificial intelligence applications. In this case, it would involve making personal data available to government authorities or researchers. This would be beneficial for three reasons:
- It would encourage the creation of new databases for use by public services;
- It would give new meaning to the right to portability by supporting improved data circulation under the exclusive control of citizens;
- It could be implemented immediately after the European data protection regulation enters into force, without the need for new constraints being introduced for private actors.
The report identifies four priority sectors: health, transport, the environment, and defence and security.
1. Implement sector-specific policy focusing on major issues
Industrial policy must focus on the main issues and challenges facing our era, including the early detection of pathologies, P4 medicine, medical deserts and zero-emission urban mobility. These issues could be identified by sector-specific commissions in charge of publicizing and running activities for their ecosystems.
2. Test sector-specific platforms
To support innovation, sector-specific platforms must be created to compile relevant data and organize its capture and collection; to provide access to large-scale computing infrastructures suitable for AI; to facilitate innovation by creating controlled environments for experiments; and to enable the development, testing and deployment of operational and commercial products.
3. Implement innovation sandboxes
The AI innovation process must be streamlined by creating testing areas (sandboxes) with three characteristics:
- a temporary reduction of the regulatory burden to help actors test innovations;
- support to help actors shoulder their obligations;
- and resources to run experiments in “real-life” conditions.
The goal of these sandboxes will be to facilitate the testing, iterative design and deployment of AI technologies in coordination with future users.
- In the health field, predictive and personalized medicine will make it possible to monitor patients in real time, and improve the detection of anomalies in electrocardiograms.
- In the transport field, the development of the driverless car is a key industrial priority.
- In the defence and security field, AI could be used to detect, and even respond to, cyberattacks that would escape human detection, and to facilitate the analysis of multimedia data.
- In the environmental field, the development of monitoring tools for farmers will pave the way for smart agriculture benefiting the entire agrifood chain.
1. Create interdisciplinary AI institutes (3IA) in selected public higher education and research establishments.
These institutes must be spread throughout France and cover a specific application or field of research.
2. Allocate appropriate resources to research, including a supercomputer designed especially for AI applications in partnership with manufacturers.
In addition, researchers must be given facilitated access to a European cloud service.
3. Make careers in public research more attractive by boosting France’s appeal to expatriate and foreign talent: increasing the number of masters and doctoral students studying AI, increasing the salaries of researchers, and enhancing exchanges between academia and industry.
New training models must be planned and tested to prepare for these professional transitions. Three main proposals have been put forward:
1. Create a public laboratory on the transformation of work
The creation of a public laboratory on the transformation of work will encourage reflection on the ways in which automation is changing occupations. It will also make it possible to test tools supporting professional transitions, especially for those likely to be most affected by automation.
2. Develop complementarity between humans and machines
To improve future working conditions, efforts must focus on developing a “complementarity index” for businesses, and on including all aspects of the digital transition in social dialogue. This could result in a legislative project on working conditions in the automated era.
3. Test new funding methods for vocational training
This testing would make it possible to address AI-related changes to value chains. Currently, businesses fund the vocational training of their own employees. However, for their digital transformation, they often call on other actors who capture value and play a key role in automating tasks but do not help fund vocational training for employees. New funding methods must therefore be tested through social dialogue.
1. The government must use AI to support the ecological transition.
Firstly, by creating a research centre focusing on AI and the ecological transition. This centre could contribute to projects such as Tara Oceans, which is at the crossroads of life sciences and ecology. Secondly, by implementing a platform to measure the environmental impact of smart digital tools.
2. As part of this approach, it must help AI become less energy-intensive by supporting the ecological transition of the European cloud industry.
3. Lastly, ecological transition must go hand in hand with the liberation of “ecological data”.
AI can help reduce our energy consumption and restore and protect nature – for instance, by using drones to carry out reforestation, or by mapping living species through image recognition technology.
In the long term, artificial intelligence technologies must be explainable if they are to be socially acceptable. For this reason, the government must take several steps:
1. Develop algorithm transparency and audits
- by developing the capacities necessary to observe, understand and audit their operation. To do so, a group of experts must be created to analyse algorithms and databases, and research on explainability must be supported to encourage civil society to carry out its own evaluations.
- This means focusing on three areas of research: producing more explainable models, producing more interpretable user interfaces, and understanding the mechanisms at work in order to produce satisfactory explanations.
2. Consider the responsibility of AI actors for the ethical issues at stake:
- By including ethics in training for AI engineers and researchers.
- By carrying out a discrimination impact assessment, along the lines of France’s privacy impact assessment (PIA), to encourage AI designers to consider the social implications of the algorithms they produce.
3. Create a consultative ethics committee for digital technologies and AI, which would organize public debate in this field.
This committee would have a high level of expertise and independence. Notably, 94% of those interviewed considered that the development of AI in our society should be regularly addressed in public debate.
4. Guarantee the principle of human responsibility, particularly when AI tools are used in public services.
This includes setting boundaries for the use of predictive algorithms in the law enforcement context. It also means extensively discussing any development of lethal autonomous weapons systems (LAWS) at the international level, and creating an observatory for the non-proliferation of these weapons.
1. Ensure that 40% of those enrolled in digital engineering courses are women by 2020
This recommendation was supported by more than 85% of those interviewed. To attain this goal, an incentive policy could be implemented. This initiative must be accompanied by a policy to train educators in the AI industry and raise their awareness of diversity issues.
2. Modify administrative procedures and enhance mediation skills
To address the growing inaccessibility of public services and the rollback of rights caused by dematerialization, administrative procedures must be modified and mediation skills enhanced.
The government could launch an automated system managing administrative procedures to help individuals better understand administrative rules and how they apply to their personal situations. At the same time, new mediation tools must be implemented to provide support to those who need it.
3. Support AI-based social innovations
The government must support social innovation programmes based on AI (dependency, health, social action and solidarity) to ensure that technological advances also benefit those working in the social action field.
Stéphanie Dos Santos, Deputy Science Attachée, French Embassy in the United Kingdom.
Published on 05 April 2018