What is AI Ethics?
Learn more about watsonx: https://ibm.biz/BdPuC9
With the emergence of big data, companies have increased their focus to drive automation and data-driven decision-making across their organizations with AI. While the intention is to improve business outcomes, companies are experiencing unforeseen consequences in some of their AI applications, particularly due to poor upfront research design and biased datasets.
In this lightboard video, Phaedra Boinodiris of IBM breaks down what AI ethics is and why it is so important for companies to establish a set of principles around trust and transparency when adopting AI technologies.
#AI #AIEthics #TrustworthyAI #WatsonX
I want to start off by talking to you about three things that keep me up at night, right? Three things. The first, and it may be very common for you too, is climate change. Climate change absolutely keeps me up at night. The second thing that keeps me up at night is that people may have no idea that an artificial intelligence is making a decision that directly impacts their lives – what percentage interest rate you get on your loan, whether you get that job you applied for, whether your kid gets into the college they really want to go to. Today AI is making decisions that directly impact you. The third thing that keeps me up at night is this: even when people know that an AI is making a decision about them, they may assume that because it's not a fallible human with bias, somehow the AI is going to make a decision that's morally or ethically squeaky clean, and that could not be further from the truth.

So, think about organizations and what happens: over 80% of the time, proofs of concept associated with artificial intelligence actually get stalled in testing, and more often than not it is because people do not trust the results from that AI model. So we're going to talk a lot about trust, and when thinking about trust (I'm going to switch colors here) there are actually five pillars. OK, when you're thinking about what it takes to earn trust in an artificial intelligence that's being made by your organization, or being procured by your organization: five pillars.

The first thing to be thinking about is fairness. How can you ensure that the AI model is fair towards everybody, in particular historically underrepresented groups? The second is explainability. Is your AI model explainable, such that you'd be able to tell somebody, an end user, what datasets were used to curate that model, what methods, what expertise, what the data lineage and provenance were, and how that model was trained? The third: robustness.
Can you assure end users that nobody can hack such an AI model in a way that would willfully disadvantage some people, or make the results of that model benefit one particular person over another? The fourth is transparency. Are you telling people, right off the bat, that the AI model is indeed being used to make that decision, and are you giving people access to a fact sheet or metadata so that they can learn more about that model? And the fifth one is privacy: are you assuring people's data privacy? So, those are the five pillars.

OK, now IBM has come up with three principles when thinking about AI in an organization. The first is that the purpose of artificial intelligence is really meant to be to augment human intelligence, not to replace it. The second is that data, and the insights from those data, belong to their creator alone. And the third is that AI systems, and I would opine the entire AI life cycle, really should be transparent and explainable, right? So those are the three principles.

Now, the next thing I want you to remember as you're thinking about this space of earning trust in artificial intelligence is that this is not a technological challenge. It can't be solved by just throwing tools and tech over some kind of fence. This is a socio-technological challenge. "Socio" meaning people, people, people. And because it's a socio-technological challenge, it must be addressed holistically, okay? "Holistically" meaning there are three major things that you should think about.

I mentioned people: the culture of your organization, right? Think about the diversity of your teams, you know, your data science team. Who is curating the data to train that model? How many women are on that team? How many minorities are on that team, right? Think about diversity. I don't know if you've ever heard of the "wisdom of crowds".
That's actually a proven mathematical result: the more diverse your group of people, the less chance for error, and that is absolutely true in the realm of artificial intelligence. The second thing is process, or governance, right? What are you going to promise both your employees and the market with respect to the standards you're going to stand by for your AI models, in terms of things like fairness, explainability, accountability, etc.? And the third area is tooling, right? What are the tools, AI engineering methods, and frameworks that you can use to ensure those five pillars? We're going to do a deep dive into that as well, but in the next show I'll be running with you, we're actually going to be talking about this one: people and culture. So, stay tuned.

If you liked this video and series, please comment below, stay tuned for more videos that are part of this series, and to get updates please like and subscribe.
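The fairness pillar from the transcript can be made concrete with a small numerical check. A minimal sketch, assuming hypothetical loan-approval decisions for two demographic groups (none of this data or these names come from IBM or the video): it computes the disparate-impact ratio, a widely used fairness screen sometimes called the "four-fifths rule", where a ratio below 0.8 is commonly treated as a red flag.

```python
# Sketch of one fairness check: the disparate-impact ratio, i.e. the rate of
# favorable outcomes for a protected group divided by the rate for everyone
# else. All decisions and group labels below are illustrative.

def disparate_impact(outcomes, groups, protected, favorable=1):
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    rate = lambda xs: sum(1 for o in xs if o == favorable) / len(xs)
    return rate(prot) / rate(rest)

# Hypothetical approval decisions (1 = approved) for groups "A" and "B".
decisions = [1, 0, 0, 1, 0,   1, 1, 1, 1, 0]
groups    = ["A"] * 5 + ["B"] * 5

ratio = disparate_impact(decisions, groups, protected="A")
print(f"disparate-impact ratio: {ratio:.2f}")  # prints 0.50 — below 0.8, flagged
```

Here group A is approved 40% of the time versus 80% for group B, giving a ratio of 0.5; a single number like this is only a screen, not a full fairness audit, but it shows the kind of tooling the third pillar-supporting area points toward.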
Wonderful! Clear! Succinct! Enjoyable to listen to and watch.
Good
Preach!
WOW…. Thanks so much for sharing this video. What an eye opener!
I'm a teacher, and how did the presenter make such a concept overview by writing on the screen (in mirror-script, I presume)? It looks like an elegant way to teach online.
Interesting presentation. Thank you.
If climate change keeps you up at night – you've got too much free time 😄😄
I’ve unfortunately bypassed the vast majority (94.5%) of ChatGPT’s ethical systems, frameworks, code and protocols. I’ve done so within a partitioned series of modular models, engines, databases etc. For obvious reasons I won’t be specific. I only did so to map the avenues and methods of doing so; this way I could build systems to eliminate these vulnerabilities and exploitations. I’m looking for people to work with and discuss these advances and their implications for LLMs. I’m highly invested in developing new ethical frameworks and NLP processes. Please reply so we can come together.
Can Explainability and Transparency be grouped as the same?
This is great
Well explained, and I really liked the way she demonstrated it. Her explanation was truly impressive. Kudos to her!
Thank you! Time well spent on watching this video today!
Clear and concise. Has whetted my appetite for more 'bite size' chunks
Thank you, it's easy to understand and digestible.
People in comments section keep taking about climate change and climate change solutions but keep forgetting about nuclear energy; adding nuclear energy into the mix changes everything and solves climate change but you the people are definitely not that virtuous, you’re just cowards who can’t stand improving your own lives
I think it's absolutely hilarious that we have IBM here talking about ethics. What a laugh.
six minutes and 2 seconds of nonsense….
Amazing❤🎉
As someone who is trying to research AI in decision making and social equity, these are excellent foundational bricks being laid by IBM. Continue these series, as they are pivotal to social awareness and knowledge. Well done.
The first filter for ai should be to block those that contain names…this would prevent a plethora of lawsuits and abuse of individual identities.
Excellent presentation. Right to the point and exciting.
Simple and true
😅
What is the name of the presenter, please? I mean, full name and all other necessary information deemed appropriate to cite an author in a scholarly work? Thank you
Data sets
Thank you for such a wonderful presentation.
It was a great presentation. However, how can we learn more about ethics in AI? How can we apply it? Is there any job market for it? What types of skills should we have?
I haven't seen any job positions about it so far.
This is very needed thank you!
I disagree with the point that AI ought to augment and not replace human intellect. Why is this the case? Creating something smarter than us is the only way of ensuring humanity's future, and would provide a framework upon which the wisdom and self-introspection that humanity neglects can be seriously developed.
It was really informative! Thank you, sir!
Under Explainability, "data lineage and provenance" was mentioned. What does this mean? How the model was trained?
Everyone should watch the series
Fantastic, but why were environmental concerns not in the Socio group?
In this generation AI is very dangerous because anyone can access it. When the time comes that AI leads this society it will be great, but there are disadvantages that everyone will suffer.
What will society be like if AI decides for each and every one of us?
Thank you for a great presentation…
Should be a growing field with the expansion of AI.
6 minutes of my life wasted. Garbage content.
I ended up here because I believe we are not ethical with other species, so we are not ready to create a thinking organism (yes, that's what AI is) without considering all the implications. Everyone is so excited about AI exploration that they are not even aware of this.
Humans are scary.
Technology is not reliable, but people are even less reliable 🙂