Interview - Thought-provoking Conversation With AI Expert, Joanna Bryson
How designing decision-making processes in which algorithms and humans work together can help us root out AI bias without putting the brakes on innovation.
It's easy to spend hours doom-scrolling bad news about algorithms breaking bad. But instead of jumping down a dystopian rabbit hole, let's take some 'woosah' time, and rewind the tape on AI visionary Joanna Bryson (@j2bryson), as she drops some serious knowledge on:
- Rooting out bias in AI.
- Regulating AI without taking the magic out of AI innovation.
- Prioritizing due diligence in software development.
By the way, Bryson is Professor of Ethics and Technology at the Hertie School of Governance in Berlin, where she educates future technologists and policymakers on AI governance, ethics, and collaborative cognition. In 2020, Bryson was among nine experts nominated by Germany to the Global Partnership for Artificial Intelligence. On top of that, she was also recognized as a top digital influencer to watch in 2021 by the European Digital Development Alliance.
The truth is AI is coming whether we like it or not. It has moved from fringe to mainstream at astonishing speed. Business is betting everything on it. Last year, companies invested about $50 billion in AI systems. Global spending on AI is expected to double by 2024. Government agencies are pouring billions into it. And calls to root out and mitigate bias in AI-driven decision-making systems are reverberating through C-suites and boards everywhere.
So, Where Do We Go From Here?
That's the big question. It seems like we've come to a crossroads in what Harvard Business Review calls the age of deployed AI. Where we go from here, argues Bryson, is more about the governance we choose to guide the evolution of AI than anything else. But there's a growing sense of urgency among business leaders, lawmakers, and technologists to double down on human-centered AI. In other words, it's time to prioritize due diligence in use cases involving AI-based risk-scoring systems in critical areas such as criminal justice, social services, and healthcare.
Yes, accountability for AI ethics may ultimately rest with boards and C-level execs. But at a grassroots level, software developers can also play a vital role in the never-ending battle to scale the best AI use cases for the public good.
"What's important," says Bryson, "is for us to understand how AI makes decisions, how it determines what we see and what we don't, and who's accountable when it breaks bad."
This brings us to this pragmatic remix of a thought-provoking conversation with AI expert Joanna Bryson!
Question: Are you an optimist or pessimist about the impact of AI on society?
Bryson: Some people call me an optimist. But others think I'm a complete technophobic pessimist.
Question: When you think about the amazing evolution of AI, what worries you about the future, what keeps you up at night?
Bryson: A future where we cannot be ourselves, where our entire background is online, and we can be penalized for that. I'm seriously worried about that right now. So, I'm very aware of the downside of the "surveillance state". And I don't really see a way to get around that. People are going to know who we are, what we do, what we've done, and what we're inclined to do.
Question: Some experts say that AI reflects the values of our culture and that we can't get AI right if we're unwilling to get right with ourselves. What's the key to getting AI right?
Bryson: I think that what we have to do, and what's becoming the most important topic, is figure out how we manage our governance of AI. How do we coordinate our actions through governance to protect individuals?
Should Algorithms Be Regulated?
Question: So, in the context of regulation and governance, what advice would you give business leaders about ethically developing AI?
Bryson: The most important thing I'm pushing right now with businesses and regulators is that we need more accountability with AI and this is doable.
Question: What about fears that regulating AI would stifle innovation? What do you make of that?
Bryson: There has been a lot of smoke and mirrors around AI. We recently had high-profile engineers going out there and saying that if you regulate us, you're going to lose deep learning, which is the magic juice of AI innovation. But, now you have major tech companies saying that they do believe in regulation for AI.
I think it's important to recognize that we can track and document whether or not you follow due diligence in AI development. The process is no different than in any other industry. It's just that AI has been flying under the radar.
Question: What's the big deal about transparency in AI, why does it matter?
Bryson: Right now, a lot of people are releasing code, and they don't even know which [code] libraries they're linked to, where they got those libraries from, or whether they've been compromised or have back doors.
So, we just need to be more careful. It's like the Morandi bridge that collapsed in Italy in 2018. When you don't know how good the materials were that went into a construction project, or whether shortcuts were taken, then you can't really say how strong your bridge is.
Today, we have laws about that with respect to bridges and buildings. But if you go back a few centuries, anyone rich enough could construct a building anywhere they wanted to. Now, if you want to put up a building, you've got to go before the planning commission. Your architects have to be licensed. And all that stuff happens because buildings really matter. If they aren't constructed well, they fall down.
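Bryson's point about untracked libraries is something developers can start acting on today. As a minimal sketch (not a tool Bryson endorses, and no substitute for a full software bill of materials), Python's standard library can at least enumerate every installed distribution and its version, so you know what your code is actually linked to:

```python
# Minimal dependency inventory: list every installed distribution and its
# version in the current environment. Knowing exactly what you ship is the
# first step toward auditing where those libraries came from.
from importlib import metadata

def dependency_inventory() -> dict:
    """Return {distribution_name: version} for the current environment."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip distributions with broken metadata
    }

if __name__ == "__main__":
    for name, version in sorted(dependency_inventory().items()):
        print(f"{name}=={version}")
```

From an inventory like this, it's a short step to checking each entry against a vulnerability database, which is exactly the kind of due diligence Bryson argues other industries already take for granted.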
Can You Stand Behind Your Software?
Question: So, you think we need to get more serious about due diligence in software development?
Bryson: We should go through procedures to make sure that the innovations we make are sustainable. We should be able to prove that we can stand behind our software. And we should be held accountable for what our software does.
Question: That sounds reasonable. But, practically speaking, how do you do that in the real world?
Bryson: So, I heard a really great story along these lines. Almost every car on the road has some level of AI in it. And one of the things that AI does is help you stay in your lane.
But a man in Germany had a stroke while driving. And it was the kind of stroke that left his hands on the wheel and his eyes open. The AI looks to see if you're falling asleep at the wheel. But in this case, the car thought the driver was okay. So, the AI maintained his lane and kept the car going straight. And it ended up in a horrible accident.
Prosecutors looked at the car company to find out what they had done wrong. But the company was able to show that they had followed best practices and convinced the prosecutors that there was no case to bring.
Question: What's the takeaway for business leaders?
Bryson: No matter what industry you're in, you should be able to show that you followed very careful methods of software construction, including when you use machine learning. If you run a bank, and you have an accounting problem, you don't look at the synapses in the brain of your accountant. You look at whether the accounts were done properly.
And I think the same thing can be done with machine learning: "Here's the data that I used. Here's how I trained it. Here's how I used best practices." That's how you reduce risk.
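Bryson's checklist — here's the data, here's how I trained it, here's how I followed best practices — can be made concrete as an auditable record written out alongside every training run. The sketch below is one possible shape for such a record; the field names, dataset path, and hyperparameters are illustrative assumptions, not a standard:

```python
# Sketch of a training provenance record: a tamper-evident JSON note of
# what data and settings went into a model, so you can later show your work.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(data_path: str, data_bytes: bytes, hyperparams: dict) -> str:
    """Build a JSON audit record for one training run."""
    record = {
        "data_path": data_path,
        # Hash of the training data makes later tampering detectable.
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "hyperparameters": hyperparams,
        "trained_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2, sort_keys=True)

# Hypothetical example: a loan-approval model trained on a CSV file.
record = provenance_record(
    "data/loans-2023.csv",               # hypothetical dataset path
    b"applicant_id,income,approved\n",   # stand-in for the real file contents
    {"model": "logistic_regression", "l2": 0.01, "seed": 42},
)
print(record)
```

Like the bank's account books in Bryson's analogy, the value isn't in inspecting the model's internals; it's in being able to prove, after the fact, that the process was done properly.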
Published at DZone with permission of Roland Alston, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.