
What ethics for artificial intelligence in health?

Will the computer and its algorithms become a medical tool like any other?

Health has been identified by the Villani report as one of the priority areas for AI development. The race for health data is therefore on. The sensitivity of these data raises ethical questions about their collection and their processing by algorithms. The irreversible digitization of bodies, however, goes beyond the exploitation of these data within a care pathway stricto sensu. If France really wants to stand out, it will have to guarantee that ethical and deontological considerations are integrated from the design of the tools through to the post-deployment phase, and it will have to adopt a global rather than a purely national perspective.

A health data ecosystem struggling to see the light of day

By 2019, our National Health Data System (SNDS) will bring together 450 terabytes (1 terabyte = 10¹² bytes) of strategic information in a single database, while in the US this volume already reaches 150 exabytes (1 exabyte = 10¹⁸ bytes). This breakthrough will enable the analysis and improvement of the health of the population, particularly for patients suffering from chronic diseases (Law on the Modernization of Our Health System).

This is a real gold mine for researchers and private companies alike, but the recent formal notice served by the CNIL on the National Health Insurance Fund for Salaried Workers (CNAMTS) for insufficient security of its national inter-scheme system shows how fragile the arrangement still is: the pseudonymisation of insured persons' data was singled out, as were the procedures for safeguarding health data, and this despite a very solid legal framework governing the collection, processing and hosting of these data. Need we recall that the DMP (shared medical record) is still not in place after 14 years of wandering, and that the SNDS is far from gathering the information produced by all the medical teams a patient encounters …
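To make the stakes concrete, here is a minimal sketch of keyed pseudonymisation, the kind of technique whose robustness the CNIL questioned. It is an illustration only: the secret key handling, field names and identifier format are assumptions, not the SNDS's actual scheme.

```python
# Keyed pseudonymisation sketch: each identifier is replaced by an HMAC
# digest, so the same patient always maps to the same pseudonym without
# the original ID appearing in the database.
import hmac
import hashlib

SECRET_KEY = b"store-me-in-an-hsm-not-in-source-code"  # hypothetical key

def pseudonymise(national_id: str) -> str:
    """Deterministically replace an identifier with a keyed digest."""
    return hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256).hexdigest()

record = {"national_id": "1850775123456", "diagnosis": "type 2 diabetes"}
safe_record = {
    "pseudonym": pseudonymise(record["national_id"]),
    "diagnosis": record["diagnosis"],
}
print(safe_record)

# The weakness: anyone who obtains the key, or who can enumerate
# plausible identifiers and query the function, can re-identify
# patients. This is why pseudonymised health data remain personal data
# and still require strong safeguards.
```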

The State, the leading client of the digital transformation, therefore depends heavily on the production of health and well-being data by its citizens to build the "data ecosystem" that will fuel artificial intelligence. The Villani report accordingly envisages that everyone, as an act of citizenship, could allow the State and local authorities to retrieve their data in order to develop AI applications serving public policy, whether data linked to their user accounts or to their connected objects.

This is a pure and simple diversion of the right to portability of personal data recognized by the European regulation that becomes applicable next May (art. 20). Even if the purpose of such sharing seems commendable in that it serves the general interest (protection of public health), its consequences will have to be clarified: what will citizens receive in return? Will it be possible to refuse this sharing of ultra-sensitive data with the State without incurring consequences, in terms of reimbursement for example, or even of access to care?

The more massive the collection, the more security and confidentiality will have to be ensured, both against misuse by banks, insurers and employers and against the risk of hacking. The $115 million settlement that followed the data breach affecting the 78 million Americans covered by the health insurer Anthem should give pause. Yet this dimension is ignored in the report on artificial intelligence …

Another point of contention: while health data enjoy a specific legal protection regime because of their sensitive nature, this is not the case for the well-being data generated by connected objects (the Internet of Things, or IoT). It is surprising, to say the least, that the report sidesteps this reflection: it is precisely the analysis of these data that will make personalized care possible, because they are much more precise than a medical record and, moreover, are collected continuously (see the work of the National Consultative Ethics Committee).

The Cambridge Analytica scandal has "vaccinated" Internet users against the risks of unfair collection of sensitive data and the resulting abuses of their fundamental rights and freedoms, so vigilance will be more than appropriate regarding the partnerships envisaged with the web giants (Google's DeepMind and Facebook AI).

The sandbox method finally chosen by France

Already used abroad to regulate blockchain technology, this "test and learn" method comes from the world of information systems, where a sandbox is an isolated environment in which potentially insecure code can run. Transposed to the regulatory field, it allows entrepreneurs to test their services and business models in a real environment (patients, professionals and hospitals) while the regulator waives certain legal constraints. The experiment is limited in duration and in the number of clients, which minimizes legal uncertainty while encouraging investment.

The method can also be applied to the regulator itself, which switches into start-up mode: the flexibility thus created limits risks while supporting innovation. It is used in common law countries, but also in Russia, India and the United Arab Emirates (see "Playing in the regulatory 'sandbox' to regulate disruptive innovation: the case of blockchain technology").

The "sandbox" will test the potential of AI for diagnosis and treatment, but also for clinical trials, drug traceability, reimbursement of care and the funding of e-health projects. Ethically, deontologically and legally, this method will clarify the responsibilities incurred in case of medical error and map the many hypotheses of legal risk raised by the use of AI in health. Opening the "sandbox" to more players, that is to say European or international ones, will be all the more beneficial for a global rather than strictly national approach.

Towards more algorithmic transparency

Another major issue for AI in health: the transparency of algorithms and the explanation of how they work.

The most common form of AI is deep learning, which is data-driven and purely statistical. It is very useful for analyzing shapes, images and texts when the database that feeds it is rich enough. For the moment, the French champions of medical imaging feed their AI with databases that are … American!
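To illustrate what "data-driven and purely statistical" means, here is a minimal sketch, assuming PyTorch is available, of the kind of model involved: a small convolutional network that learns to classify medical images solely from labeled examples, with no built-in medical knowledge. The architecture and sizes are illustrative, not any vendor's actual system.

```python
# A tiny image classifier: everything it "knows" comes from the
# statistics of the (image, label) pairs it is trained on, which is
# why the richness of the database matters so much.
import torch
import torch.nn as nn

class TinyImagingNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale scans
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool to 1x1
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = TinyImagingNet()
dummy_batch = torch.randn(4, 1, 64, 64)  # stand-in for real scans
logits = model(dummy_batch)
print(logits.shape)                      # torch.Size([4, 2])
```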

However, in a sector as critical as health, it is not enough to collect as much data as possible; rationality must also be introduced to explain the relevance of the results proposed by the AI. It is this same lack of transparency that is often criticized in the ethical dilemma of autonomous vehicles (should the car spare the pedestrian crossing against the light, or avoid endangering its passenger? See CNIL, "How can humans keep the upper hand? Report on the ethical issues of algorithms and artificial intelligence").
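One way to "introduce rationality" into a black-box model is post-hoc explanation. Below is a minimal sketch of occlusion sensitivity: it measures how much the predicted score drops when each image region is masked, revealing which regions the model relied on. The patch size and zero-masking strategy are illustrative assumptions, and this is only one of many explainability techniques.

```python
# Occlusion sensitivity: explain an image classifier's prediction by
# masking regions one at a time and recording the score drop.
import torch

def occlusion_saliency(model, image, target_class, patch=8):
    """Return a coarse map of how much each patch supports the prediction."""
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target_class].item()
        _, h, w = image.shape
        saliency = torch.zeros(h // patch, w // patch)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                masked = image.clone()
                masked[:, i:i + patch, j:j + patch] = 0.0  # occlude region
                score = model(masked.unsqueeze(0))[0, target_class].item()
                saliency[i // patch, j // patch] = base - score
    return saliency  # large values = regions the model relied on
```

Applied to the toy network above, for instance via `occlusion_saliency(model, dummy_batch[0], target_class=0)`, it yields a coarse heat map that a clinician could compare against the anatomy actually relevant to the diagnosis.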

The Villani report calls for the creation of an ethics committee to audit these digital technologies and AI. When a decision concerning a patient involves manipulating critical, confidential and highly protected data, the algorithm will have to have been audited before the service is deployed. More precisely, ethics will have to be respected at the stage of collecting health and well-being data (fairness), then at the heart of the algorithm as a data-governance value throughout the data's life cycle, and finally in the practice of AI in health, by explaining the results produced in line with the purpose of the collection (protection of health).

The difficulty will be to project the algorithm's operation in relation to other machines (which currently do not talk to each other), in other words to manage the consequences of the future performance that quantum computers will make possible.

This transparency, synonymous with trust in the algorithm, is a major strategic issue for a company. It reveals the value judgments of the algorithm's designer, and therefore of the company operating it, giving the latter a significant competitive advantage in a market where customer loyalty has become a must.

Ethics and accountability

On the legal side, the European Parliament has already raised the question of whether or not to endow robots with legal personality. The broader reflection on liability for AI is still in its infancy. The Product Liability Directive (85/374/EEC) lays down rules that are ill-suited to the context of the IoT and robotics. It must be clarified to meet the challenges of AI, cybersecurity and the IoT, so as to ensure a consumer-friendly internal market that is conducive to competition. This revision will have to take into account the increasing autonomy of systems (sensors, software) and the digital rights resulting from the GDPR and the e-Privacy Regulation.

This work is ongoing within the European Commission, with the mid-term review of the Digital Single Market strategy, and in the European Parliament, with the preparation of a report on "The cost of non-Europe in robotics and AI". Human rights could also be enriched with a right to brain integrity and a right to mental privacy.

The original version of this article was published on The Conversation.

Source: https://www.theconnectedmag.fr/ethique-intelligence-artificielle-sante/
