It’s important to ask the right questions about AI activity, especially given the acceleration in AI adoption caused by the pandemic. The question of which questions to ask is a central concern for the AI experts and practitioners guiding AI adoption in companies, a theme reflected in a survey recently carried out by McKinsey.

Among respondents at high-performing AI companies, 75% report that AI spending has increased across business functions due to the pandemic, according to McKinsey’s 2020 global survey on AI. These organizations use AI to generate value, increasingly in the form of new revenue.

Three experts discussed the effects of this growth and related AI trends in interviews ahead of the AI World Executive Summit: The Future of AI, held virtually on July 14, 2021.

David Bray, PhD, is the inaugural director of the Atlantic Council’s nonprofit GeoTech Center and a contributor to the event program;

Anthony Scriffignano, PhD, is Senior VP & Chief Data Scientist at Dun & Bradstreet;

And Joanne Lo, PhD, is the CEO of Elysian Labs.

What would you like to highlight at the AI World Executive Summit?

David Bray, PhD, inaugural director of the Atlantic Council’s GeoTech Center

David: “AI is at its best when it helps us figure out which questions we should be asking in the first place. We live in a world that is changing rapidly, and in some ways we are not yet aware of the full extent of these changes, especially during the COVID-19 pandemic. When we know the right questions, we can work toward a better world. AI can serve as a digital mirror of how we operate as companies, governments, and societies, and help us strive to be better versions of ourselves.”

He notes that when an AI system produces a biased result, “the data we feed into it is a reflection of us. Part of the solution is to change the data these systems are exposed to.”

Joanne: “When you have a rough idea of what you’re looking for, AI can help you refine your question and get there. Think of it as a smart version of autocomplete. But instead of completing the sentence, it completes the whole idea.”

For example, you might tell your digital assistant that you want to take a trip tomorrow. Knowing your preferences, your history, and your age group, it might come back with a suggestion to go to the beach. “You have to ask yourself what that means. Is your decision-making process a collaboration with the machine? How much are you willing to work on it with a machine? How much are you willing to give up? The answer is very personal and situation-dependent.”

She adds, “I might want the machine to tell me my optimal vacation spot, but I might not want it to pick my child’s name. Or maybe I would. The decision is personal, which means the question you should be asking is: how much are you willing to give up? What’s your limit?”

And the questions you ask AI should be ones that Google cannot easily answer. “You can be pretty sure Google can’t help you decide where to send your child to school: a language immersion program, a math immersion program, or a STEM research program. That’s up to you.”


Lessons from the pursuit of ethical AI

What lessons have we learned so far from the experiences of Timnit Gebru and her colleague Margaret Mitchell, the AI ethicists who are no longer with Google?


Anthony Scriffignano, PhD, Senior VP and Chief Data Scientist at Dun & Bradstreet

Anthony: “If industry doesn’t take the lead in trying to do something, regulators will. Industry can work well with regulators by regulating itself. Ethics is a huge area that requires a lot of definition.

“The OECD [Organisation for Economic Co-operation and Development, for which Anthony serves as an AI expert] is working on principles of AI and ethics, and experts around the world are leaning into that effort. It’s not as easy as everyone would like. We had better lean in, because it will never be easier than it is today.”

Echoing Lo’s thoughts, he said, “We are already taking direction from our digital agents. When Outlook tells me to go to a meeting, I go. The question is, how much are we willing to give up? When I think the AI can make a better decision for me, or free me to do something else, or protect me from my own bad decision, I tend to say yes.” However, he notes, when ethics and marginalization come into play, things get more complicated.

He added, “In the future, we cannot just let the computer tell us what to do. We have to work with it. AI will move closer to offering advice we are more likely to take.”

David: Observing that the real concerns and nuances of these issues are often not addressed in depth, he noted, “We hear what both sides want to say.” In the future, he would like to see a degree of participation or oversight involving experts from outside the company. “If the public doesn’t feel involved in data and AI, people will fill the space with their own prejudices, and disinformation will spread. This means that, from the start, companies need to proactively think about how to involve various members of the public, for example through ombudsmen. We need to find ways to do AI with people, so that hiccups don’t mean, ‘I don’t know what’s going on behind the curtain.’”

He advises, “Let’s say everyone strives to do the best they can. The incentives that motivate them could sit in different places. If everyone believes they are doing the right thing, how can you find a structural solution for tracking data and AI that gives people confidence the system will be less biased? Data trust is a worthwhile thing to work toward, and the first step is for people to feel they have freedom of choice and control over their own data.”

“When a company’s business is based on the exclusivity of the data it holds, it can be harder to be open about how its AI interacts with people. And when a company says, in effect, ‘pay no attention to the assistant behind the curtain,’ that makes it difficult to instill trust.”

He noted that European countries are considering stricter standards for data protection and other digital issues, including AI. “The European efforts are well-intentioned and must be balanced.” He suggested that the European standards for protecting health data will likely be worked out through judicial proceedings over a period of 10 to 15 years, raising questions about whether this might stifle or hinder innovation in health care. At the same time: “China’s model is that your data belongs to the government, a path neither the US nor Europe wants to follow.”

He added, “We need to find some general operating principles that will instill trust, and one way could be through human juries that review AI activity.”


One way to check for AI misconduct

On the idea of an “AI jury” to review AI misconduct:


Joanne Lo, PhD, CEO of Elysian Labs

Joanne: “The most important lesson for me [from the recent Google ethics experience] is that government and policy have lagged behind technology development for years, if not decades. I’m not talking about getting regulations in place; I’m talking about taking the first step of understanding how technology affects society, and especially democracy in America, and what role government has to play. Once we get to that point, we can talk about policy.”

She explained, “The government has not been able to keep pace with how technology is used in our society, and this lag in understanding has become a national security issue. What happens when Facebook and other social media platforms develop as they did, without government intervention? They become platforms that allow adversaries to exploit and attack the very foundation of democracy.”

“What is the government going to do about it? Will the government work with the engineers who say this is wrong, who want the government to step in, who want better laws to protect whistleblowers and better organizations to support ethics? Is the government actually going to do anything?”

Anthony: “That’s interesting. You could agree on certain principles, and your AI would need to be auditable to prove that it did not violate those principles. If I accuse an AI of being biased, I should be able to prove or disprove it, whether it shows a discriminatory or an affirmative tendency, or economically favors one group over another. You might also conclude that the AI wasn’t biased, but the data was.”
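Anthony’s point about proving or disproving a bias accusation can be made concrete. One common, simple audit, not something the interviewees prescribe, is to compare a model’s favorable-decision rates across groups (the demographic parity difference). A minimal sketch with hypothetical decisions and group labels:

```python
from collections import defaultdict

def demographic_parity_difference(decisions, groups):
    """Largest gap in favorable-decision rate between any two groups.

    decisions: sequence of 0/1 outcomes (1 = favorable decision)
    groups: sequence of group labels, same length as decisions
    """
    totals = defaultdict(int)     # decisions seen per group
    positives = defaultdict(int)  # favorable decisions per group
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "a" is favored 3/4 of the time, group "b" 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(decisions, groups)  # 0.75 - 0.25 = 0.5
```

A gap near zero does not prove fairness, and a large gap does not by itself show whether the model or the training data is at fault, which is exactly the distinction Anthony draws. But metrics like this make the accusation testable rather than rhetorical.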

“It’s a very nuanced thing. If there is a jury of 12 peers, the word ‘peer’ matters: they should be similarly informed and similarly experienced. Real juries, by contrast, come from all walks of life.”

