"Artificial Intelligence: Seeking an ethical conscience"

Yesterday I attended a very engaging panel discussion titled "Artificial Intelligence: Seeking an ethical conscience", organized by General Assembly (GA) and Accenture Applied Intelligence.

The panel discussion was moderated by Stephen Tracy, COO of Milieu Insight. The panellists were Joon Seong Lee, MD of Accenture Applied Intelligence, ASEAN; Rumman Chowdhury, Global Lead for Responsible AI; and Deborah Santiago, MD of Legal Services, Digital & Strategic Offerings.

Stephen started the discussion by highlighting a survey which found that 74% of Singaporeans felt their lives were being impacted by AI, and 77% felt their lives would be better thanks to AI.



The Lawyer advocating the need for regulatory compliance
Deborah, in her introduction, highlighted that she has been with Accenture for over a decade and has worked on the legal aspects of breakthrough technologies. She stressed the need for regulatory compliance and made a case for considering the legal perspective in emerging tech, specifically highlighting algorithmic biases that result in discrimination, and the issue of privacy rights. She also acknowledged the gaps in the legal and regulatory framework, and how in those instances decision-making relied on compliance with organizational values. Her comment, "the law is going to be a moral minimum and ethics will always be a higher bar", aptly summarizes her view of law and ethics.

The Data Scientist pitching for ethics in AI
Rumman, with a background in quantitative fields, is an AI professional in a decidedly forward-thinking role. She noted that "Tech Ethicist" was predicted to be one of the top 5 AI jobs for 2019, and felt that the role transcends the legal, technology, social science and perhaps philosophy spaces. Much progress has been made in this space over the past two years, she said, with significant solutions created for Responsible AI applications, including the Algorithmic Fairness Tool, a bias-investigation and correction tool from Accenture. (A related ethical challenge was highlighted by Standard Chartered Bank's Shameek Kundu: factually correct machine recommendations may still disadvantage some groups of people.) Rumman also made the interesting point that Singapore was becoming the center of gravity for understanding the Asian perspective.
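Accenture's tool itself wasn't demonstrated at the event, but the kind of bias check such fairness tools perform can be illustrated with a toy demographic-parity calculation. Everything below (the groups, the numbers, the four-fifths threshold as a flag) is a hypothetical sketch, not the actual tool:

```python
# Toy demographic-parity check: compare the rate of favourable model
# outcomes (e.g. loan approvals) across two groups. All data hypothetical.

def positive_rate(outcomes):
    """Fraction of 1s (favourable decisions) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of positive rates between groups. A ratio well below 1.0
    suggests group_a fares worse; the common 'four-fifths rule' flags
    anything under 0.8 for closer investigation."""
    return positive_rate(group_a) / positive_rate(group_b)

# Hypothetical model decisions for two demographic groups
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3 of 8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
```

A real bias-investigation tool would go further (conditioning on legitimate factors, testing statistical significance, and proposing corrections), but a simple ratio like this is often the first signal that a model's outcomes differ across groups.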

The Practitioner highlighting the need for explainability and transparency of algorithms
Joon Seong Lee, with over two decades of consulting experience, advises clients across industries on AI, Big Data & Analytics. He pointed out that Responsible AI was becoming an important concern for clients, and that a lot of collective effort was required to develop an ethics framework. This effort, he stressed, cannot be undertaken by a single company, and he also highlighted the need for explainability and transparency of algorithms.

Responding to an audience question on the explainability and transparency of algorithms, Rumman talked about recent papers and developments such as Local Interpretable Model-Agnostic Explanations (LIME), a technique to explain the predictions of any machine learning classifier and evaluate its usefulness in trust-related tasks. She also highlighted use cases where natural language processing was used to explain an algorithm's image selection process, and briefly mentioned other techniques for improving algorithmic explainability (game theory, optimization functions, counterfactual explanations, etc.).
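The core intuition behind model-agnostic explanation methods like LIME is to probe the black box with small perturbations around one instance and see which features move the prediction. Real LIME fits a locally weighted linear surrogate model; the sketch below is only that core intuition, with a hypothetical scoring model and made-up feature values:

```python
# Toy local sensitivity probe in the spirit of LIME: perturb each feature
# of a single instance and measure how the black-box score changes.
# (LIME proper samples many perturbations and fits a weighted linear
# surrogate; this finite-difference probe just illustrates the idea.)

def black_box(x):
    # Hypothetical opaque scoring model (e.g. a credit score in [0, 1])
    income, debt, age = x
    return max(0.0, min(1.0, 0.5 + 0.4 * income - 0.6 * debt + 0.1 * age))

def local_importance(model, instance, eps=0.01):
    """Per-feature sensitivity of the model's output around one instance."""
    base = model(instance)
    importances = []
    for i in range(len(instance)):
        bumped = list(instance)
        bumped[i] += eps                      # nudge one feature
        importances.append((model(bumped) - base) / eps)
    return importances

x = [0.6, 0.3, 0.5]  # hypothetical normalized applicant features
print(local_importance(black_box, x))  # roughly [0.4, -0.6, 0.1]
```

For this instance the probe recovers the model's local weights: debt pushes the score down hardest, income pushes it up. An explanation of this shape ("your debt level lowered the score most") is what makes such techniques useful for building trust in a classifier's individual predictions.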

Deborah highlighted the need to explain algorithms to lay people and made a strong pitch for diversity in teams (not just along gender and class lines). If lay people understand why an algorithm succeeds or fails, she felt, that would lead to improved decision-making.

The economic impact of AI
Quoting an Accenture and Frontier Economics study, Stephen highlighted how AI could boost Singapore's productivity by 35% by 2035. Joon Seong cautioned that the study was a statement of potential, not of certainty; the key question, in his view, was how to achieve these productivity increases in a responsible way. He felt that most industries will benefit from AI, and that in Singapore the public sector was at the forefront of AI adoption, such as in the areas of safety and security. He cited progress in video analytics and image processing as case examples.

Rumman acknowledged that AI will be a factor for growth and represented a certain potential energy. She had two interesting perspectives:
a.     Practical perspective: She highlighted how land, labour and capital were required for the traditional models of growth, and how digital growth was eschewing those key ingredients. However, she pointed out that enabling factors need to be in place before AI's potential energy can be converted into kinetic energy: at the very least, you need clean, usable data and strong business cases for AI to realise its full potential.
b.    An idealistic perspective: Technology has progressed so far that today we can check email every few moments. However, conventional economic measures prioritize dollar-and-cent values over perhaps equally important factors like happiness. This lack of emphasis, she believed, was a key reason for millennial burnout. She pointed to interesting efforts like "The Happiness Project", where alternatives are being explored.

Stephen highlighted his experience at SapientNitro and acknowledged that there is always a gap between the appetite for, and the actual impact of, breakthrough technologies.

Need for governance in ethics and ethics education for techies
On the question of how to take ethics seriously, Deborah highlighted that clear laws and regulatory frameworks exist for issues like cybersecurity and privacy (e.g., GDPR). She felt that the need is to create an environment of trust, and highlighted how compliance was moving from a reactive to a predictive mode.

Rumman made a very interesting point: AI is a tool, and rather than regulating the use of a tool, the focus should be on outcomes, for which clear regulations already exist. The key question is then to examine how existing laws apply to AI and to identify any gaps. Deborah added that the key is to make regulations symbiotic with, and facilitative of, innovation, pointing to how early regulations in the e-commerce space enabled the evolution of the trust mechanisms in place today.

Stephen talked about Robert Moses, and how bridge building in New York had an inherent bias against the lower classes. The panel was unanimous that we must avoid programming unethical biases into AI. It was insightfully pointed out that AI doesn't use free will to decide; any AI bias has a human origin, and putting the blame for those biases on AI is erroneous. Rumman termed this a case of outsourcing moral obligations: AI is human when we want it to be, and simply a machine otherwise.

Echoing the earlier points, it was highlighted that one way to handle biases in AI is through explainable AI, implemented through simple and transparent processes. Rumman made the interesting point that before attacking the bias, it's crucial to ask whether AI should be making those decisions at all. She made a case for human-centric AI, where AI enables a human to make a decision. She talked about the Moral Machine thought experiments, with questions like an autonomous car having to choose between saving a baby or a young woman. Deborah commented that sometimes these big philosophical questions prevent us from thinking about more practical issues: we focus on the philosophical exercise of whom the car hits, but not on the sensors inside a self-driving car and the kind of information they might pick up about us. The conclusion was that there is a need for a governance system that maintains the dignity of individuals. The rich discussion also touched on the following fascinating points:
-      Implementation of AI should not discriminate against any group of people.
-      Should AI be held to a higher standard?
-      Can you make an AI explain human ethical motivations?
-      Will machines become fully autonomous?

Stephen quoted a survey in which 92% of AI leaders said they were training their developers in ethics. In the ensuing discussion, the key question was whether this figure was accurate and what it actually meant. An interesting perspective shared was that most companies today spend considerable resources on ethics- and conduct-related training, and Accenture too has its own version, called "Conduct Counts". The key question, however, was what ethical gap remains and what additional ethics education technologists need.

Rumman made an important point about the existing setups and ethical frameworks (ethics boards, etc.) and the need to make them broader. She spoke of the Markkula Center for Applied Ethics at Santa Clara University, which has made its ethics courseware available online for anyone to use. Deborah added that there was a need to remind people of core values and of existing legal and regulatory frameworks, so that they are better equipped to solve hard business ethics questions.

Concluding remarks
The panel made some succinct concluding remarks. Rumman made a case for "Everyday Ethics", Joon Seong advised the effort to "Explain the unexplainable", and Deborah made the fascinating point that "Complex questions need diverse teams".


