If a self-driving car hits someone, who is liable? Or suppose we soon have a home robot to do the cleaning, and a glitch makes it slap a visitor in the face: who is liable then? Liability rules do exist, but they are due for an update, and Europe agrees: the European Commission wants to modernize its liability rules.
The Liability of Artificial Intelligence
That overhaul has everything to do with artificial intelligence: think of autonomously operating robots, smart drones and smart home devices. The current rules are 40 years old and therefore date from a time when computers were rare, let alone self-driving cars. Liability becomes especially thorny where artificial intelligence is involved. After all, if technology makes its own decisions, who is accountable when a decision turns out to be wrong? And what about updates: if a manufacturer is negligent in releasing them and damage results, what can a judge do?
It is a difficult balance: regulate too tightly and you get in the way of innovation, yet there are also many ethical dilemmas to address. Consider, for example, what Neuralink does: to what extent can you apply technology to the human brain? To what extent is someone then personally responsible for what he or she does? At what point are you still a human rather than a robot? It is a complicated story, but Europe does need legislation and regulation for it to some degree. And those rules must be future-proof, taking future inventions into account as far as possible.