Who or what assumes responsibility in the event of an accident involving an autonomous vehicle?
Is it inevitable that algorithms encode the biases of their creators and of society at large?
How should privacy and data protection evolve in a world where people’s online behavior is tracked, logged and sold for profit and technological advancement?
The future is now, and these are questions we have to face today. They are urgent, and they are complex. Just as AI systems exist in a multi-stakeholder environment where social, legal, and economic models interact, the process of addressing these issues must include experts from many different fields.
Accountability is the ability to give an account of one’s actions. For an algorithm, this means producing an explanation for a decision or action in a way that is understandable and useful to humans. Accountability helps build trust in AI systems, since it provides justification for decisions that would otherwise seem arbitrary to the average person.
At their core, AI systems are tools created by humans and given a specific purpose. Thus, no matter their degree of autonomy and learning, the role and responsibility of humans cannot be overlooked. Responsibility refers to the duty to answer for one’s actions (and not merely explain them, as in the principle of Accountability). Ultimately, it has to do with embedding moral values and taking into account the societal impact of AI systems during all stages of their development. “True responsibility in AI is not just about how we design these technologies but how we define their success.”
Transparency refers to the visibility of the factors an algorithm uses to make a decision. The result should be reproducible, and the method used to produce it should be available to all stakeholders affected by it. This can be tricky, not only because of intellectual property rights but also because of the opacity problem in machine learning. Transparency should also extend to the data sources used to train algorithms, so that they too can be subject to inspection.
20 September, 2021
10:00 - 20:00
The symposium will be a hybrid event.
To ensure the health and safety of everyone involved in-person, this will be a test-for-entrance event.
In order to enter the event, you will be required to show proof of vaccination, proof of recovery, or a negative COVID-19 test. You can book one for free here: https://www.testenvoortoegang.org
Don’t forget to download the CoronaCheck app and upload your QR code!
If you are unwilling or unable to attend the entire event in person, there will also be a livestream! Don’t forget to choose that option when signing up. Lunch will be provided for the first 20 people who sign up for the livestream option.
Tickets are now available! Sign up!
|10:15-10:55||Hans de Zwart: Introduction to the topic of racist technology|
|11:10-12:30||Panel on Algorithmic Bias: Challenges and Solutions|
|12:30-14:30||Lunch break at Vapiano|
|14:30-15:10||Raphaële Xenidis: On equality law and algorithmic discrimination|
|15:20-16:00||Sicco Verwer: "Three naive Bayes approaches for discrimination-free classification"|
|16:10-16:50||Luciano Cavalcante Siebert: Meaningful Human Control over Artificial Intelligence|
|17:00-18:00||Daniel Domscheit-Berg on Data Privacy and Cyber-security|
Daniel Domscheit-Berg is an activist and IT security expert. He helped build the WikiLeaks platform from late 2007 to September 2010 and acted as its spokesperson under the pseudonym Daniel Schmitt. Domscheit-Berg wrote a book about his experiences, “Inside WikiLeaks”. He has also presented and debated topics concerning the risks of a digitized and networked world at numerous conferences worldwide.
Hans de Zwart is a researcher and lecturer at the Amsterdam University of Applied Sciences. As a philosopher, he focuses on the ethics and philosophy of technology. He is one of the founders of the Racism and Technology Center and sits on the board of the Correspondent Foundation. Earlier, Hans was the Executive Director of the Dutch digital civil rights organisation Bits of Freedom, fighting for freedom of communication and privacy on the internet. In the past he was Shell’s Senior Innovation Adviser for Global HR and Learning Technologies, before that a Moodle consultant for Stoas Learning.
Andrea's research interests are in logic, the mathematical foundations of computer science, and their uses in the development of tools. She is a member of the Responsible AI team, working on concrete tools to explain, audit and develop intelligent systems responsibly.
Raphaële is a Lecturer in EU Law at Edinburgh Law School. Her main research interests are in discrimination and equality law, critical legal theory, law & society and legal mobilisation, human rights law and law and technology. In particular, she has been working on issues of intersectionality and intersectional discrimination in the framework of her Ph.D. dissertation and on algorithmic discrimination, bias in automated decision-making systems and data-driven inequality as part of her postdoctoral research project.
Ivo is a Data Scientist at Dataprovider.com. His work covers the whole engineering process for machine learning models, from design through research and development to deployment. At Dataprovider.com, these models are applied to classify or extract information from the semi-structured content of public websites. Ivo's interests lie mostly in the automatic classification and discovery of malicious websites, including fraudulent webshops, fake-news sites, and cybersquatters.
Sicco Verwer has been an assistant professor at TU Delft since 2014, working on machine learning with applications in cyber security and software engineering. He has worked on several topics in machine learning and is best known for his work in grammatical inference, i.e., learning state machines from trace data. He focuses on using machine learning for tasks other than prediction, such as analysis, optimization, control, and verification.
Luciano Cavalcante Siebert is an assistant professor at the Interactive Intelligence Group, Delft University of Technology, where he focuses on Responsible Artificial Intelligence. He studied Mechatronics at PUCPR (B.Sc.) and completed his M.Sc. and Ph.D. in electrical engineering at the Federal University of Paraná (UFPR), Brazil. Prior to his current position, Luciano was a postdoc at TU Delft’s AiTech initiative. He has over eight years of experience in research on developing and applying intelligent techniques to robotics, machine learning, optimization, and automation.