
AI Is Everywhere—Can Standards Catch Up?

By IEEE Computer Society Team on February 3, 2025
  • defining high-risk use environments, risk mitigation, and prevention measures;
  • determining how to allocate responsibility to different actors in the AI lifecycle—including developers, deployers, and end users;
  • greater emphasis on safety, health, and human rights; and
  • integration of sustainability considerations throughout the AI lifecycle.
  • Ethical biases help applications serve the user’s goal. Search engine results, for example, are biased based on the user’s query, which helps the application deliver useful results.
  • Unethical biases produce results that are unfair and harmful to users and society. Examples here include facial recognition systems that routinely misidentify people of color and hiring algorithms that favor male candidates.
  • As the authors note, the distinctions between ethical and unethical biases “hinge on the bias’s purpose, context, and impact on stakeholders” and thus make human oversight essential.
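One common way the kind of unethical bias described above is detected in practice is a selection-rate audit. The sketch below is a hypothetical illustration, not part of the SMART criteria themselves: it computes per-group hiring rates and the disparate-impact ratio, where values below roughly 0.8 (the "four-fifths rule" used in U.S. employment guidance) are a conventional red flag for an unfairly biased screen. The data and group labels are invented for the example.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values well below 1.0 suggest the process disadvantages
    the protected group and warrants human review."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit log: 4 of 10 women vs. 8 of 10 men advanced.
log = ([("F", True)] * 4 + [("F", False)] * 6 +
       [("M", True)] * 8 + [("M", False)] * 2)
print(round(disparate_impact(log, "F", "M"), 2))  # 0.5, well below 0.8
```

A metric like this only flags a disparity; as the authors stress, deciding whether the bias is purposeful and acceptable in context still requires human oversight.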

    The SMART criteria updates and notes reflect these issues and include

    • developing and using mechanisms to ensure that AI system biases are beneficial;
    • ensuring that goals, inhibitors, and foundational ethical requirements cover the full spectrum of the AI system life cycle, from development through decommissioning; and
    • expanding the goals for governance, human oversight, and risk and knowledge management throughout the system life cycle.

    Transparency


    Ethical transparency is essential to ensuring accountability and to identifying and addressing biases. This transparency entails a visible decision-making process for AI systems that is understandable to users and fosters trust.

    The SMART criteria updates and notes reflect these issues and include

    • emphasizing the importance of human judgment in determining appropriate transparency levels;
    • recognizing that transparency issues can arise throughout an AI system’s lifecycle; and
    • enhancing data oversight, knowledge governance, and risk management.
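A decision process that is "visible and understandable to users" can be made concrete by recording, with every output, which rules fired and why. The sketch below is a minimal, hypothetical illustration of that idea; the screening rules and thresholds are invented for the example and are not drawn from the SMART criteria.

```python
def score_applicant(applicant):
    """Rule-based screen that returns both a decision and a plain-language
    record of the rules that produced it -- a minimal sketch of a visible,
    user-understandable decision process. Rules here are hypothetical."""
    reasons = []
    approved = True
    if applicant["income"] < 30_000:
        approved = False
        reasons.append("income below 30,000 threshold")
    if applicant["missed_payments"] > 2:
        approved = False
        reasons.append("more than 2 missed payments on record")
    if not reasons:
        reasons.append("all screening rules passed")
    return {"approved": approved, "reasons": reasons}

decision = score_applicant({"income": 25_000, "missed_payments": 1})
print(decision)
# {'approved': False, 'reasons': ['income below 30,000 threshold']}
```

Exposing the reasons alongside the decision is what lets a human reviewer judge whether the outcome is fair, which is exactly where the criteria place emphasis on human judgment.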

    Privacy


    The existing SMART privacy criteria focus on established legal concepts—including the right to confidentiality and to data privacy, protection, and security. They also emphasize the context and culture in which an AI system is used.

    The SMART criteria review took a more holistic view of ethical privacy, understanding its intrinsic link to an individual’s self-expression, personhood, ethics, values, and personal safety and security.

    The SMART criteria updates and notes reflect these issues and include

    • acknowledging the complex interplay between technological innovation and the diverse, often deeply personal, aspects of privacy;
    • recognizing that privacy issues cover the entire AI system lifecycle, including points such as after AI modifications, after decommissioning a service, at the end of a contractual relationship, and following a person’s death; and
    • noting that privacy protections gain depth when considered in conjunction with ethical transparency, accountability, and algorithmic bias.
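The lifecycle points listed above (service decommissioning, the end of a contract, a person’s death) can be operationalized as erasure triggers. The sketch below is a toy illustration of that pattern, assuming a hypothetical event vocabulary; it is not a compliance implementation.

```python
# Hypothetical erasure triggers drawn from the lifecycle points above.
ERASURE_EVENTS = {"service_decommissioned", "contract_ended", "subject_deceased"}

class PersonalDataStore:
    """Toy store that erases a subject's records whenever a lifecycle
    event ends the basis for processing -- a sketch of lifecycle-wide
    privacy handling, not a real data-protection system."""

    def __init__(self):
        self.records = {}  # subject_id -> personal data

    def put(self, subject_id, data):
        self.records[subject_id] = data

    def handle_event(self, subject_id, event):
        """Return True if the event triggered erasure of the subject's data."""
        if event in ERASURE_EVENTS and subject_id in self.records:
            del self.records[subject_id]
            return True
        return False

store = PersonalDataStore()
store.put("u42", {"name": "A. Person"})
print(store.handle_event("u42", "contract_ended"))  # True
print("u42" in store.records)                       # False
```

The point of the sketch is that erasure obligations do not end at deployment: the event set has to cover the whole lifecycle, including what happens after the service itself is gone.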

    Dig Deeper


    As the authors of this AI ethics update observe, if the polarization between AI accelerationists and decelerationists is any measure, the harms and benefits of AI systems are neither fully mapped nor likely to be shared equitably between the developers and service providers building them and the societies around the world consuming these rapidly growing and evolving AI products.

    “Artificial Intelligence For the Benefit of Everyone” discusses this and other ethics-related issues in depth; it also provides details on each of the four key workstream areas. To dig even deeper, join other AI experts, researchers, government officials, and enthusiasts at the international IEEE Conference on Artificial Intelligence (IEEE CAI) 5–7 May 2025 in Santa Clara, California.

    In addition to showcasing the latest AI research and breakthroughs, IEEE CAI emphasizes applications and key subject areas, from sustainability and human-centered AI to issues and industry-specific applications in healthcare, transportation, and engineering and manufacturing.
