NIST: AI bias goes far beyond the data itself

By now, no one should dispute that most artificial intelligence systems are built on, and continue to reproduce, biases that are problematic in some way. This has been observed and demonstrated hundreds of times. The challenge for organizations is to root out AI bias itself, rather than settling for better, supposedly unbiased data.


Following last year's public comment period on a major revision of its publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), the National Institute of Standards and Technology (NIST) made a strong argument for looking beyond data, and even beyond machine-learning processes, to uncover and root out AI bias.

Rather than blaming poorly collected or poorly labeled data, the authors say the next frontier of AI bias is "human and systemic institutional and societal factors," and they push for a shift toward a socio-technical perspective in the search for better answers.

"Context is everything," said Reva Schwartz, NIST's principal investigator for AI bias and one of the report's authors. "AI systems do not operate in isolation. They help people make decisions that directly affect other people's lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to its impacts, as the comments we received from a wide range of people and organizations highlighted."

What are the human and systemic biases that cause AI bias?

According to the NIST report, human biases fall into two broad categories, individual and group, with many specific biases under each.

Individual human biases include automation complacency, where people over-rely on automated tools; implicit bias, an unconscious belief, attitude, association, or stereotype that affects someone's decision-making; and confirmation bias, where people prefer information that is consistent or congruent with their existing beliefs.

Group human biases include groupthink, the phenomenon in which people make suboptimal decisions out of a desire to conform to the group or avoid disagreement, and funding bias, when results are reported in a way that satisfies a funding agency or financial backer, which may itself be subject to additional individual or group biases.

As for systemic bias, the NIST report defines it as historical, societal, and institutional: long-standing biases that have been codified into society and its institutions over time and are largely accepted as "facts" or "just the way things are."

These biases matter because of how much impact AI deployment is having on how organizations make decisions today. Because of racially biased data, people are denied mortgages, and with them a first chance at home ownership. Job seekers are denied interviews because AI systems are trained on historical hiring decisions that favored men over women. Promising young students are denied interviews or admission to colleges because their last names don't match those of successful applicants from the past.

In other words: biased AI locks as many doors as it efficiently opens. If organizations don't actively work to eliminate bias in their deployments, they will quickly find themselves facing a severe deficit of trust in how they think and operate.

At the core of the socio-technical view is the recognition that the outputs of any AI application are more than mathematical and computational products. They are made by developers and data scientists working from varying positions within varying institutions, all of whom carry some level of bias.

NIST's report reads: "A socio-technical approach to AI takes into account the values and behavior modeled from the datasets, the humans who interact with them, and the complex organizational factors that go into their commissioning, design, development, and ultimate deployment."

NIST believes that through a socio-technical lens, organizations can foster trust by improving characteristics such as privacy, reliability, robustness, security, and resiliency in their AI systems.

One of the report's recommendations is for organizations to implement or improve their test, evaluation, verification, and validation (TEVV) processes: there should be ways to mathematically verify bias in a given dataset or trained model. The authors also recommend involving experts from a wider range of fields and backgrounds in AI development efforts, and bringing in multiple stakeholders from different departments, or from outside the organization. Finally, "human-in-the-loop" models, in which individuals or groups continuously correct the raw ML output, are another effective tool for mitigating bias.
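To make the TEVV idea concrete, here is a minimal sketch of one common way to quantify bias in a model's outputs: the disparate-impact ratio, which compares positive-outcome rates across protected groups. This is an illustrative metric chosen for this example, not a procedure prescribed by the NIST report; the function names and data are hypothetical.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate per protected group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += pred
        counts[grp][1] += 1
    return {g: pos / tot for g, (pos, tot) in counts.items()}

def disparate_impact(predictions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model outputs: 1 = interview offered
preds  = [1, 1, 1, 1, 0, 1, 0, 1, 0, 0]
groups = ["a"] * 5 + ["b"] * 5
print(disparate_impact(preds, groups))  # 0.5: group "b" selected half as often
```

A check like this can run as a regression test on every retrained model, flagging deployments whose ratio drops below a chosen threshold; but as the report stresses, passing a numeric test does not by itself address the human and systemic sources of bias.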

Alongside the revised report, NIST is developing its Artificial Intelligence Risk Management Framework (AI RMF), a consensus-driven set of recommendations for managing the risks involved in AI systems. Once complete, it will cover the transparency, design and development, governance, and testing of AI technologies and products. The initial comment period for the AI RMF has closed, but there will be further opportunities to weigh in on AI risks and their mitigation.

Statement: This article is reproduced from 51CTO.COM.