The human side of AI

How do we incorporate ethics into AI as we find practical applications for new tools?

Slightly uncanny generated art and ChatGPT-written documents are everywhere. They're possible because artificial intelligence (AI) is fed information, learns from it, then recognizes and duplicates patterns. So where does that information come from in the first place? People. Large language models and art generators can't work without human input, and the vast supply of human input on the internet isn't all created equal. That raises the question: how can we ensure the information being fed into AI is representative of equity-seeking groups?

Experts like Dr. Bill Rankin are thinking about the answers. Rankin brings more than 20 years of experience working with post-secondary institutions, governments and learning organizations to design, develop and implement innovative learning initiatives. His work has taken him around the world to support the integration of new technology. He has held Director of Learning roles with Apple and pi-top, and founded his own company, Unfold Learning LLC. Now serving as Expert in Residence at SAIT, Rankin shares his thoughts on keeping ethics at the centre of this rapidly advancing technology.

Dr. Bill Rankin and Dr. Raynie Wood, Dean, School for Advanced Digital Technology

“When we talk about AI creation, it's a bit of a misnomer because it's really AI replication, not creation. It can create new things, but it's creating them based on the patterns it's learned. And from a human standpoint, this actually poses interesting and complicated challenges because the AI will learn whatever patterns it is fed. Many of the patterns that humans create have, for example, biases in them.”

One concern is that an algorithm searching for patterns across a vast data source pulls the predominant information and distills it into a median response. Rankin believes that if we trust content produced by AI without thinking critically about the process behind it, we will silence the voices and perspectives of marginalized or non-dominant societal groups.

“We need to be able to generate the kinds of products that make sense in diverse communities and not just in the dominant community. The danger of AI is that in averaging everything together, it averages out diversity. It averages out voices that we need to hear.”
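Rankin's point about averaging can be made concrete with a toy sketch in Python. The data and numbers below are invented for illustration and don't come from any real model; the sketch simply shows that a system which reproduces the most frequent pattern in its training data never surfaces the less common perspectives.

```python
from collections import Counter

# Hypothetical "dataset" in which one perspective dominates
# and two minority perspectives appear less often.
responses = ["dominant"] * 90 + ["minority_a"] * 6 + ["minority_b"] * 4

# A system that always returns the most frequent pattern it has seen
# reproduces only the dominant voice.
most_common, count = Counter(responses).most_common(1)[0]

print(most_common)             # -> "dominant"
print(count / len(responses))  # -> 0.9: the other 10% never surface
```

In this sketch, 10 per cent of the dataset is never reflected in the output. Scaled up, that is the "averaging out" of diversity Rankin describes.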

We've identified some of the threats. Now, what about the opportunities?

Rankin believes there's space to look between the lines: space for society to recognize the bias and oppression that can occur when a predominant culture's voice outweighs the voices of marginalized communities. Making users conscious of bias or exclusionary patterns within the data gives them a choice and adds to the process of humanization.

“If AI is looking at patterns and what it produces is negative, that's certainly an indictment of our own ethos and ethics, and suggests we need to spend more time understanding diversity by talking with people who are unlike us in productive ways.”

Rankin sees potential for practical application if we approach AI as a tool to incorporate at the starting point of learning, rather than as the answer or solution. We'll develop our human side by looking at the patterns AI identifies and using a critical eye to recognize and understand what those patterns are telling us about ourselves.

“We talk about standing on the shoulders of giants sometimes, and I think what AI is really doing is letting us stand on more shoulders and see a lot further. Our reach should extend beyond what we might be able to do just on our own.

“And in that way, it's similar to education or travel or all these other things that expand our horizons. AI should expand our horizons.”

New World. New Thinking.

Learn more about SAIT's 2020-2025 strategic plan, our new course for the future. #hereatsait