Privacy Focus Group: AI : Basic concepts and regulatory trends

2 min read

Artificial Intelligence, a.k.a. AI, is hot! Basking in the heat of an ‘AI summer’, the field nevertheless faces some burning issues regarding ethics and legal controls. This lucid webinar, organized by the Privacy Focus Group, helps you gain much-needed insight.

AI is hot, but beware…

Conceived in 1956, during the ‘Dartmouth Summer Project’, AI has lived through a series of ‘summers’ of great promise… and ‘winters’ of lingering disinterest. Today, at 65 years of age, it looks like a magical and indispensable tool in every field of human endeavour, though sometimes perhaps too enthusiastically applied. Indeed, there’s still the question of ‘what is AI?’ – the starting point of the webinar ‘AI, basic concepts & regulatory/ethical trends’ by Dr. Jan De Bruyne and Thomas Gils, both at the Centre for IT & IP Law (CiTiP) of KU Leuven.

What is AI?

“That’s an easy question to ask, but a hard one to answer,” says Dr. Jan De Bruyne. “There is no single definition of AI accepted by the scientific community.” However, one can distinguish between ‘narrow/weak AI’ (focused on one task, at which it outperforms humans), ‘general/strong AI’ (exhibiting human-level intelligence) and ‘super AI’ (surpassing human intelligence). In this classification, all of today’s AI applications are ‘narrow’.

Two major ways of approaching AI are the ‘top-down’, knowledge-based approach (a 1956 idea, with knowledge input by experts) and the ‘bottom-up’, data-driven approach (today’s popular choice, facilitated by the data tsunami of recent years). The latter is based on ‘machine learning’, of which there are several families. This webinar focuses on supervised (labelled data), unsupervised (unlabelled data), reinforcement (reward/punishment) and deep learning. These learning methods can have unforeseen side effects: bias in the data set leads to biased results and unwanted societal impact.
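The point about supervised learning and data-set bias can be made concrete with a toy sketch. The example below is purely illustrative (not from the webinar): a minimal 1-nearest-neighbour classifier learns from labelled pairs, and the same input gets a different outcome depending on whether the training labels carry historical bias.

```python
# Minimal sketch of supervised learning (labelled data) with a toy
# 1-nearest-neighbour classifier, showing how a biased data set skews
# results. All data here is invented for illustration.

def nearest_neighbour(train, x):
    """Predict the label of x from labelled (feature, label) pairs."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Balanced training set: feature is a score, label is a decision.
balanced = [(2, "reject"), (4, "reject"), (6, "accept"), (8, "accept")]

# Biased training set: same scores, but past decisions under-accepted
# one group, so the labels encode that bias.
biased = [(2, "reject"), (4, "reject"), (6, "reject"), (8, "accept")]

print(nearest_neighbour(balanced, 5.5))  # -> accept
print(nearest_neighbour(biased, 5.5))    # -> reject (same input, biased labels)
```

The classifier itself is unchanged between the two runs; only the labels differ, which is exactly how a biased data set produces biased results without any flaw in the algorithm.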

Ethical and societal AI

The ubiquitous application of AI raises ethical questions. Thomas Gils points out the role of ethics in design (acceptable for all stakeholders), ethics for designers (codes of conduct…) and ethics by design (what should AI do?). And yes, ethics are relevant, because AI is not (yet) robust (e.g. racist pitfalls), controllable (deep fakes), transparent (too much ‘black box’) or reliable (bias-prone)!

However, numerous initiatives have been launched to build regulatory and ethical frameworks for ‘trustworthy’ AI. In the EU, a high-level expert group has already issued several publications, including the Ethics Guidelines for Trustworthy AI (with 7 key requirements). UNESCO, too, has a draft text (2021) on AI ethics. In 2020, the European Commission produced a white paper on AI (to both regulate and stimulate AI), followed in 2021 by an EC proposal for a Regulation (the AI Act, with a risk-based approach), one of a set of data-related acts.

Other initiatives have been taken by the Council of Europe, the OECD, the World Intellectual Property Organization and the Global Partnership on AI, as well as by countries such as the US and China. These efforts are welcome and needed, but many challenges remain (ethical impact assessment, digital literacy…), as a concluding ‘product case’ by Dr. Jan De Bruyne illustrates. Clearly, this webinar has only scratched the surface of a field that will grow in importance and urgency in the coming years, but it serves as a splendid starting point.


About the author

Guy Kindermans is a freelance journalist specialized in information technology, privacy and business continuity. From 1985 to 2014 he was a senior staff writer at Data News (Roularta Media Group).