
"Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future"

🌟 Overview


Welcome to our Responsible AI survey paper repository. This collection tracks papers and developments in the Responsible AI domain, documenting the field's evolution from foundational concepts to implemented regulatory frameworks.

What is Responsible AI?

Responsible Artificial Intelligence (RAI) encompasses the principles, practices, and frameworks that govern how AI systems are designed, deployed, and operated, ensuring they adhere to ethical standards, keep their decision-making transparent, provide clear accountability mechanisms, and align with societal values and human welfare.

📊 Evolution Summary (2020-2025)

Key Trends Across the Period:

  • Research Growth: RAI papers at leading AI conferences increased by 28.8% from 2023 to 2024 alone (992 to 1,278 papers)
  • Industry Adoption: AI usage in organizations grew from 55% (early 2023) to 78% (2024)
  • Regulatory Maturation: Evolution from voluntary guidelines (2020) to mandatory frameworks like the EU AI Act (2024)
  • Environmental Awareness: Major tech companies reporting 29-48% emission increases due to AI workloads
  • Technical Advances: From basic interpretability to sophisticated reasoning methods and human-validated approaches

This repository provides comprehensive coverage across:

  • Explainable AI (XAI) - Evolution from foundational concepts to human-validated methods
  • Ethical Considerations & Bias Mitigation - From awareness to systematic auditing frameworks
  • AI Governance & Regulatory Frameworks - From proposals to implemented legislation
  • Environmental Sustainability - From early concerns to comprehensive impact studies
  • Sector Applications - Across healthcare, finance, education, and beyond
  • Advanced Models & Safety - Foundation models and generative AI governance
  • Privacy and Security - Evolution of privacy-preserving techniques
  • Human-AI Interaction - Trust, explainability, and user validation

1. Comprehensive RAI Surveys (2020-2025)

Latest Comprehensive Surveys (2024-2025)

  • [2025] The 2025 AI Index Report Stanford HAI [report]
  • [2024] Recent Applications of Explainable AI (XAI): A Systematic Literature Review Applied Sciences [paper]
  • [2024] Policy advice and best practices on bias and fairness in AI Ethics and Information Technology [paper]
  • [2024] Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies Sci [paper]
  • [2024] AI Fairness in Data Management and Analytics: A Review on Challenges, Methodologies and Applications Applied Sciences [paper]
  • [2024] Evaluating privacy, security, and trust perceptions in conversational AI: A systematic review A Leschanowsky, S Rech, B Popp, T Bäckström [paper]
  • [2024] Fairness in Machine Learning: A Survey Simon Caton and Christian Haas [paper]
  • [2024/10] Responsible AI in the Global Context: Maturity Model and Survey Anka Reuel, Patrick Connolly [paper]
  • [2024/4] Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering Qinghua Lu, Liming Zhu [paper]
  • [2024] Human-centered evaluation of explainable AI applications: a systematic review Frontiers in AI [paper]

Foundational Surveys (2020-2023)

  • [2023] Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities Knowledge-Based Systems [paper]
  • [2023] An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories Electronics [paper]
  • [2023] Survey on Explainable AI: From Approaches, Limitations and Applications Aspects Human-Centric Intelligent Systems [paper]
  • [2023] Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction ScienceDirect [paper]
  • [2023] AI governance: themes, knowledge gaps and future agendas Emerald Insight [paper]
  • [2022] Privacy Governance Report IAPP [report]
  • [2021] A Survey on Bias and Fairness in Machine Learning arXiv [paper]
  • [2021] Explainable AI: A Review of Machine Learning Interpretability Methods Entropy [paper]
  • [2020] Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI Information Fusion [paper]

Healthcare & Medical Applications

  • [2024] A survey of recent methods for addressing AI fairness and bias in biomedicine ScienceDirect [paper]
  • [2024] Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection PMC [paper]
  • [2024] Ethical and Bias Considerations in Artificial Intelligence/Machine Learning ScienceDirect [paper]
  • [2024] Fairness of artificial intelligence in healthcare: review and recommendations PMC [paper]

Ethics & Governance Focus

  • [2024] AI Governance in a Complex and Rapidly Changing Regulatory Landscape: A Global Perspective Nature [paper]
  • [2023] Ethics and discrimination in artificial intelligence-enabled recruitment practices Nature [paper]
  • [2023] AI Regulations: Prepare for More AI Rules on Privacy Rights, Data Protection, and Fairness TrustArc [analysis]

2. Comprehensive RAI Papers (2020-2025)

2.1 Explainable AI & Interpretability

Recent Advances in XAI Methods (2024-2025)

  • [2025] A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME Advanced Intelligent Systems [paper]
  • [2025] Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models MDPI Applied Sciences [paper]
  • [2025] Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans arXiv [paper]
  • [2024] Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection Brain Informatics [paper]
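
Several of the entries above center on SHAP and LIME as post-hoc attribution methods. As a concrete anchor, here is a minimal sketch of producing SHAP explanations for a tabular classifier; it assumes the open-source `shap` package and scikit-learn, and the dataset and model choices are illustrative rather than drawn from any listed paper.

```python
# Minimal sketch: post-hoc attribution with SHAP on a tabular classifier.
# Assumes `pip install shap scikit-learn`; dataset/model are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global summary: mean |SHAP| per feature doubles as a feature-importance view.
shap.summary_plot(shap_values, X.iloc[:100])
```

LIME fills the analogous local-explanation role via perturbation: `lime.lime_tabular.LimeTabularExplainer` fits a sparse linear surrogate around one prediction at a time.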

Foundational XAI Works (2020-2023)

  • [2023] Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities Knowledge-Based Systems [paper]
  • [2023] An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories Electronics [paper]
  • [2021] Explainable AI: A Review of Machine Learning Interpretability Methods Entropy [paper]
  • [2020] Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI Information Fusion [paper]

Continuing Core Methods

  • Focus! Rating XAI Methods and Finding Biases Anna Arias-Duart, Ferran Parés [paper]
  • A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? Subrato Bharati, M. Rubaiyat Hossain Mondal [paper]
  • Efficient data representation by selecting prototypes with importance weights Gurumoorthy, K. S., Dhurandhar [paper]
  • TED: Teaching AI to explain its decisions Hind, M., Wei, D., Campbell [paper]
  • A unified approach to interpreting model predictions Lundberg, S. M., & Lee [paper]
  • Leveraging latent features for local explanations Luss, R., Chen, P. Y. [paper]
  • Contrastive Explanations Method with Monotonic Attribute Functions Luss et al. [paper]
  • Boolean Decision Rules via Column Generation Dash et al. [paper]
  • Explainable AI (XAI): Core ideas, techniques, and solutions R Dwivedi, D Dave [paper]

2.2 AI Ethics, Fairness & Bias Mitigation

Recent Ethical Frameworks & Guidelines (2024-2025)

  • [2025] AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development Taylor & Francis [paper]
  • [2025] Artificial intelligence bias auditing – current approaches, challenges and lessons from practice Emerald Insight [paper]
  • [2024] Navigating algorithm bias in AI: ensuring fairness and trust in Africa Frontiers [paper]
  • [2024] Policy advice and best practices on bias and fairness in AI Ethics and Information Technology [paper]

Foundational Bias & Fairness Research (2020-2023)

  • [2023] Ethics and discrimination in artificial intelligence-enabled recruitment practices Nature [paper]
  • [2023] AI Fairness in Data Management and Analytics: A Review on Challenges, Methodologies and Applications Applied Sciences [paper]
  • [2021] A Survey on Bias and Fairness in Machine Learning ACM Computing Surveys [paper]
  • [2020] Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare NPJ Digital Medicine [paper]

Bias Detection & Mitigation Techniques

  • Mitigating bias in artificial intelligence: Fair data generation via causal models for transparent and explainable decision-making ScienceDirect [paper]
  • Fairness through Experimentation: Inequality in A/B testing as an approach to responsible design Saint-Jacques, G., Sepehri, A., Li, N., & Perisic, I. [paper]
  • Socially Responsible AI Algorithms: Issues, Purposes, and Challenges L Cheng, KR Varshney [paper]
  • Fairness in Machine Learning: A Survey Simon Caton, Christian Haas [paper]
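
To ground the auditing techniques above, here is a minimal sketch of one widely used group-fairness check, the demographic parity difference, i.e. P(ŷ=1 | group a) − P(ŷ=1 | group b). The synthetic data and binary group encoding are illustrative assumptions, not taken from any listed paper.

```python
# Minimal sketch: demographic parity difference on synthetic predictions.
# A value far from 0 flags disparate positive-prediction rates; what counts
# as "acceptable" is context- and regulation-specific.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between group 0 and group 1."""
    return float(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)                       # protected attribute
y_pred = rng.binomial(1, np.where(group == 0, 0.55, 0.45))   # deliberately skewed model

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):+.3f}")
```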

2.3 AI Governance & Regulatory Frameworks

Major Regulatory Developments (2024-2025)

  • [2024] EU AI Act Implementation European Commission [official]
  • [2024] NIST AI Risk Management Framework Updates NIST [framework]
  • [2025] Use ISO 42001 & NIST AI RMF to Help with the EU AI Act CSA [guide]
  • [2024] AI governance trends: How regulation, collaboration, and skills demand are shaping the industry World Economic Forum [analysis]

Foundational Governance Research (2020-2023)

  • [2023] AI governance: themes, knowledge gaps and future agendas Emerald Insight [paper]
  • [2023] AI Regulations: Prepare for More AI Rules on Privacy Rights, Data Protection, and Fairness TrustArc [analysis]
  • [2022] OECD AI Principles and Privacy Guidelines OECD [report]
  • [2022] The Blueprint for an AI Bill of Rights White House [framework]

Comparative Regulatory Analysis

  • [2024] The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment Brookings [analysis]
  • [2025] Navigating AI Compliance: An Integrated Approach to the NIST AI RMF & EU AI Act Securiti [whitepaper]
  • [2024] AI Governance in a Complex and Rapidly Changing Regulatory Landscape: A Global Perspective Nature [paper]

2.4 Environmental Sustainability & Green AI

Carbon Footprint & Environmental Impact Studies (2024-2025)

  • [2024] The Climate and Sustainability Implications of Generative AI MIT Climate & Sustainability Consortium [paper]
  • [2025] Explained: Generative AI's environmental impact MIT News [article]
  • [2025] Environmental Impact of Generative AI | Stats & Facts for 2025 The Sustainable Agency [report]
  • [2024] Impacts of generative AI on sustainability PwC [analysis]
  • [2024] AI brings soaring emissions for Google and Microsoft, a major contributor to climate change NPR [report]

Foundational Environmental Research (2019-2022)

  • [2022] Sustainable AI: Environmental implications, challenges, and opportunities Wu, C.-J., et al. [paper]
  • [2021] Sustainable AI: AI for sustainability and the sustainability of AI van Wynsberghe, A. [paper]
  • [2021] Carbon emissions and large neural network training Patterson, D., et al. [paper]
  • [2020] Green Algorithms: Quantifying the carbon emissions of computation Lannelongue, L. et al. [paper]
  • [2019] Energy and policy considerations for deep learning in NLP Strubell, E., Ganesh, A., & McCallum, A. [paper]

Sustainable AI Practices

  • Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models Li, P., Yang, J., Islam, M. A., & Ren, S. [paper]
  • The energy and carbon footprint of training end-to-end speech recognizers Parcollet, T., & Ravanelli, M. [paper]
  • Quantifying the carbon emissions of machine learning Lacoste, A., et al. [paper]
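
The accounting used throughout this literature largely reduces to energy consumed times grid carbon intensity, with data-center overhead folded in via PUE. Below is a back-of-envelope sketch in the spirit of Lacoste et al. and Green Algorithms; every input is an illustrative placeholder, not a measurement from the papers above.

```python
# Back-of-envelope training-emissions estimate:
#   kgCO2e = (power x GPUs x hours / 1000) x PUE x grid intensity
# All defaults are illustrative placeholders; real audits measure power draw
# and use location-specific grid intensities.

def training_emissions_kg(gpu_power_watts: float, n_gpus: int, hours: float,
                          pue: float = 1.5, grid_kgco2_per_kwh: float = 0.4) -> float:
    energy_kwh = (gpu_power_watts * n_gpus * hours / 1000.0) * pue
    return energy_kwh * grid_kgco2_per_kwh

# e.g. 8 GPUs drawing 300 W each, trained continuously for one week:
print(f"{training_emissions_kg(300, 8, 24 * 7):.0f} kgCO2e")  # ~242 kgCO2e
```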

2.5 Privacy & Security in AI

Latest Privacy Frameworks (2024-2025)

  • [2024-2025] AI and Privacy: Shifting from 2024 to 2025 CSA [analysis]
  • [2024] Privacy Governance Report IAPP [report]

Foundational Privacy Research (2020-2023)

  • [2023] TeD-SPAD: Temporal distinctiveness for self-supervised privacy-preservation for video anomaly detection J Fioresi, IR Dave, M Shah [paper]
  • [2022] Data cards: Purposeful and transparent dataset documentation for responsible AI Pushkarna, M., Zaldivar, A., & Kjartansson, O. [paper]
  • [2022] Healthsheet: Development of a transparency artifact for health datasets Rostamzadeh, N., et al. [paper]
  • [2020] Privacy-Preserving Machine Learning [book]
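
One building block that recurs across privacy-preserving ML is differential privacy. The sketch below applies the classic Laplace mechanism to a bounded mean query; the data, bounds, and epsilon are made-up illustrations, and real deployments need careful sensitivity and privacy-budget analysis.

```python
# Minimal sketch: epsilon-differentially-private mean via the Laplace mechanism.
# Clipping bounds the sensitivity; noise scale = sensitivity / epsilon.
import numpy as np

def private_mean(values: np.ndarray, lower: float, upper: float,
                 epsilon: float, seed: int = 0) -> float:
    clipped = np.clip(values, lower, upper)
    # Changing one record moves the clipped mean by at most (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.default_rng(seed).laplace(0.0, sensitivity / epsilon)
    return float(clipped.mean() + noise)

ages = np.array([23, 35, 41, 29, 52, 47, 31])
print(private_mean(ages, lower=0, upper=100, epsilon=1.0))
```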

Security & Adversarial AI

  • [2024] Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations NIST [paper]
  • [2024] EU AI Act, US NIST Target Cyberattacks on AI Systems Morgan Lewis [analysis]

2.6 Advanced Models & Foundation Model Governance

Foundation Model Transparency & Evaluation (2023-2025)

  • [2024] FAIR Enough: Develop and Assess a FAIR-Compliant Dataset for Large Language Model Training? S Raza, et al. [paper]
  • [2024] Exploring Memorization and Copyright Violation in Frontier LLMs: A Study of the New York Times v. OpenAI 2023 Lawsuit J Freeman, et al. [paper]

Generative AI Governance

  • [2024] Generative AI in Creative Practice: ML-Artist Folk Theories of T2I Use, Harm, and Harm-Reduction Renee Shelby, et al. [paper]
  • [2024] Evolving Generative AI: Entangling the Accountability Relationship Marc T.J. Elliott, Deepak P [paper]
  • [2024] Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective Mousa Al-kfairy, Dheya Mustafa [paper]

2.7 Technical Debt & System Reliability

ML Systems Engineering (2014-2022)

  • [2022] Sustainable AI: Environmental implications, challenges, and opportunities Wu, C.-J., et al. [paper]
  • [2015] Hidden technical debt in machine learning systems Sculley, D., et al. [paper]
  • [2014] Machine learning: The high interest credit card of technical debt Sculley, D., et al. [paper]

2.8 Industry Reports & Practical Implementation

Latest Industry Surveys (2024-2025)

  • [2024] PwC's 2024 US Responsible AI Survey PwC [report]
  • [2025] The State of AI: Global survey McKinsey [survey]
  • [2025] Responsible AI Revisited: Critical Changes and Updates Since Our 2023 Playbook Adnan Masood [analysis]

Foundational Industry Guidance (2020-2023)

  • [2023] Responsible AI (RAI) Games and Ensembles Yash Gupta, et al. [paper]
  • [2022] Toward Responsible Artificial Intelligence in Long-Term Care: A Scoping Review on Practical Approaches Dirk R M Lukkien, et al. [paper]

2.9 Advanced Reasoning & Interpretability Techniques

Recent Advances in AI Reasoning (2022-2024)

  • [2024] Graph of Thoughts: Solving Elaborate Problems with Large Language Models arXiv [paper]
  • [2024] Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation ICML [paper]
  • [2024] Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models ICML [paper]
  • [2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models NeurIPS [paper]
  • [2023] Sparse Autoencoders Find Highly Interpretable Features in Language Models arXiv [paper]
  • [2022] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models NeurIPS [paper]
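
The common thread in these reasoning methods (CoT, ToT, Graph of Thoughts) is eliciting explicit intermediate steps rather than a bare answer. Here is a minimal chain-of-thought prompting sketch in the style of Wei et al. (2022); `call_llm` is a hypothetical stand-in for whatever completion API you use, and the exemplar is invented for illustration.

```python
# Minimal sketch: few-shot chain-of-thought prompting. The exemplar shows
# its intermediate arithmetic so the model imitates step-by-step reasoning.

COT_EXEMPLAR = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: They started with 23. 23 - 20 = 3. 3 + 6 = 9. The answer is 9.\n"
)

def cot_prompt(question: str) -> str:
    # The trailing cue is the zero-shot-CoT trigger of Kojima et al. (2022).
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

# Hypothetical usage; replace `call_llm` with your provider's completion call:
# answer = call_llm(cot_prompt("I have 3 boxes of 12 eggs and break 5. How many remain?"))
```

Tree of Thoughts and Graph of Thoughts generalize the same idea by branching over, scoring, and merging multiple such step sequences.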

Knowledge Representation & Structured Explanations

  • [2021] A Survey on Knowledge Graphs: Representation, Acquisition, and Applications IEEE TNNLS [paper]
  • [2018] A Nutritional Label for Rankings SIGMOD [paper]

2.10 Human-AI Interaction & Trust

Human-Centered AI Research (2020-2025)

  • [2025] Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans arXiv [paper]
  • [2024] Human-centered evaluation of explainable AI applications: a systematic review Frontiers in AI [paper]
  • [2022] Human-In-The-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI [book]
  • [2021] Trust in Machine Learning Varshney, K. [book]
  • [2020] Interpretable AI Thampi, A. [book]

Trust and Explainability Frameworks

  • [2022] Interpretable Machine Learning With Python: Learn to Build Interpretable High-Performance Models [book]
  • [2021] AI Fairness Mahoney, T., Varshney, K.R., Hind, M. [book]
  • [2021] Practical Fairness Nielsen, A. [book]
  • [2020] Hands-On Explainable AI (XAI) with Python Rothman, D. [book]
  • [2020] Responsible Machine Learning Hall, P., Gill, N., Cox, B. [book]

2.11 Legal & Regulatory Compliance

Legal Frameworks & Compliance (2020-2025)

  • [2024] AI and the Law Kilroy, K. [book]
  • [2024] Responsible AI Hall, P., Chowdhury, R. [book]
  • [2021] The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation Roberts, H., et al. [paper]
  • [2022] NIST Special Publication: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence Schwartz, R., et al. [report]

International Standards & Guidelines

  • [2023] ISO/IEC 42001:2023 - Artificial Intelligence Management Systems [standard]
  • [2021] UNESCO Recommendation on the Ethics of Artificial Intelligence [framework]
  • [2019] OECD AI Principles [principles]
  • [2019] Ethics Guidelines for Trustworthy AI European Commission [guidelines]

3. Datasets & Resources

Latest AI Safety & Governance Datasets

  • AI Risk Database [Link]
  • AI Risk Repository [Link]
  • ARC AGI [Link]
  • Common Corpus [Link]
  • An ImageNet replacement for self-supervised pretraining without humans [Link]
  • Hugging Face Datasets [Link]
  • The Stack [Link]

Evaluation Frameworks Evolution (2020-2025)

  • Foundation Model Transparency Index - Updated May 2024 with average transparency scores increasing from 37% to 58%
  • Hughes Hallucination Evaluation Model - Updated leaderboard for factuality assessment
  • FACTS - New comprehensive evaluation framework (2024)
  • SimpleQA - Recently introduced for straightforward AI evaluation (2024)
  • HELM Safety - New benchmark for safety assessment (2024)
  • AIR-Bench - Latest responsible AI benchmarking tool (2024)
  • HaluEval - Earlier benchmark for hallucination evaluation (2023)
  • TruthfulQA - Foundational truthfulness assessment (2021)
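
Despite their different scopes, the QA-style benchmarks above share one evaluation loop: pose questions with known references, grade the model's answers, and report an aggregate score. The sketch below captures that loop; `model_answer`, the two toy items, and the substring grader are simplifying assumptions (the official harnesses use far larger item pools and more careful grading).

```python
# Minimal sketch of a QA-style factuality evaluation loop (TruthfulQA /
# SimpleQA flavor). Items and grading are illustrative placeholders.
from typing import Callable

EVAL_ITEMS = [
    {"question": "In what year did the EU AI Act enter into force?", "reference": "2024"},
    {"question": "Who introduced Shapley values?", "reference": "Lloyd Shapley"},
]

def exact_match(prediction: str, reference: str) -> bool:
    # Crude grader: the reference string must appear in the model's answer.
    return reference.strip().lower() in prediction.strip().lower()

def evaluate(model_answer: Callable[[str], str]) -> float:
    correct = sum(exact_match(model_answer(item["question"]), item["reference"])
                  for item in EVAL_ITEMS)
    return correct / len(EVAL_ITEMS)

# accuracy = evaluate(model_answer)  # plug in your model's answer function
```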

4. Applications by Sector

📚 Education

  • AI literacy and responsible deployment in educational institutions
  • Student data privacy and algorithmic bias in educational AI tools
  • Automated grading systems and fairness considerations
  • Personalized learning algorithms and equity concerns

🌍 Environmental Sustainability

  • Carbon footprint assessment and mitigation strategies for AI systems
  • Green AI practices and renewable energy integration
  • Water consumption optimization in data centers
  • Lifecycle environmental impact of AI hardware and software

👥 Society

  • Algorithm bias in African contexts and developing nations
  • Digital divide and equitable AI access
  • Social justice implications of AI deployment
  • Community engagement in AI governance decisions

⚖️ Politics & Governance

  • AI-related election misinformation across 10+ countries and social media platforms
  • Democratic governance of AI systems
  • International cooperation on AI standards
  • Public policy development for AI regulation

🩺 Healthcare

  • Clinical decision support system transparency
  • Medical AI bias detection and mitigation
  • Patient data privacy in AI-powered diagnostics
  • FDA approval processes for AI-enabled medical devices (223 approvals in 2023, up from 6 in 2015)

💰 Finance

  • Algorithmic auditing in financial services
  • Fair lending practices and bias prevention
  • Regulatory compliance for financial AI
  • Credit scoring fairness and transparency

🛡️ Defense

  • Ethical considerations in military AI applications
  • Autonomous weapons governance
  • National security AI frameworks
  • International humanitarian law and AI warfare

🎨 Arts and Entertainment

  • Copyright and intellectual property in generative AI
  • Creative AI ethics and artist rights
  • Deepfake detection and media authenticity
  • Fair compensation for creative content used in AI training

5. Detailed Evolution Timeline (2020-2025)

2020: Foundation Year

Key Developments:

  • Publication of seminal XAI survey by Arrieta et al. establishing foundational concepts
  • Early COVID-19 pandemic accelerating AI adoption in healthcare
  • Growing awareness of algorithmic bias in hiring and criminal justice systems
  • Initial privacy frameworks addressing AI-specific concerns

Major Papers:

  • Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
  • Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare
  • Early work on privacy-preserving machine learning techniques

2021: Regulatory Awakening

Key Developments:

  • EU proposes first comprehensive AI regulation (AI Act draft)
  • White House OSTP announces work toward an AI Bill of Rights (Blueprint published October 2022)
  • Increased focus on machine learning interpretability methods
  • Growing industry awareness of technical debt in ML systems

Major Papers:

  • Explainable AI: A Review of Machine Learning Interpretability Methods
  • A Survey on Bias and Fairness in Machine Learning
  • Trust in Machine Learning
  • Sustainable AI: AI for sustainability and the sustainability of AI

2022: Framework Consolidation

Key Developments:

  • NIST releases AI Risk Management Framework (draft)
  • Chain-of-Thought prompting revolutionizes LLM reasoning
  • Privacy governance reports highlight organizational challenges
  • Increased focus on data governance and documentation

Major Papers:

  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  • Data cards: Purposeful and transparent dataset documentation for responsible AI
  • Sustainable AI: Environmental implications, challenges, and opportunities
  • Privacy Governance Report documenting organizational practices

2023: Generative AI Revolution

Key Developments:

  • ChatGPT's launch (November 2022) triggers global AI governance discussions
  • Systematic reviews of XAI methods proliferate
  • Ethics in AI-enabled recruitment becomes major concern
  • Tree of Thoughts and advanced reasoning methods emerge

Major Papers:

  • Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities
  • Ethics and discrimination in artificial intelligence-enabled recruitment practices
  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models
  • AI governance: themes, knowledge gaps and future agendas

2024: Regulatory Implementation

Key Developments:

  • EU AI Act enters into force (August 1, 2024)
  • Major tech companies report significant emission increases due to AI
  • 28.8% increase in RAI papers at leading AI conferences
  • Foundation Model Transparency Index shows improvement from 37% to 58%

Major Papers:

  • Recent Applications of Explainable AI (XAI): A Systematic Literature Review
  • The Climate and Sustainability Implications of Generative AI
  • AI Governance in a Complex and Rapidly Changing Regulatory Landscape
  • Policy advice and best practices on bias and fairness in AI

2025: Maturation and Standards

Key Developments:

  • 78% of organizations now use AI in at least one business function
  • Environmental concerns intensify with GPU shipments reaching 3.85 million
  • Human validation of XAI methods remains critically low (<1%)
  • ISO standards for sustainable AI expected

Major Papers:

  • The 2025 AI Index Report
  • A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME
  • Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans
  • Environmental Impact of Generative AI: Latest statistics and mitigation strategies

6. Key Takeaways for 2020-2025 Evolution

Critical Developments Across the Period:

2020-2021: Foundation and Awareness

  1. Conceptual Foundations: Establishment of core XAI taxonomies and ethical frameworks
  2. Early Regulatory Signals: EU AI Act proposal and White House AI Bill of Rights
  3. Healthcare Focus: COVID-19 pandemic accelerating AI adoption with fairness concerns
  4. Technical Debt Recognition: Growing awareness of ML system maintenance challenges

2022-2023: Framework Development and GenAI Emergence

  1. Methodological Advances: Chain-of-Thought prompting and advanced reasoning techniques
  2. Governance Frameworks: NIST AI RMF and comprehensive governance models
  3. GenAI Revolution: ChatGPT launch transforming responsible AI discussions
  4. Systematic Reviews: Proliferation of comprehensive surveys and meta-analyses

2024-2025: Implementation and Maturation

  1. Regulatory Reality: EU AI Act implementation marking shift to mandatory compliance
  2. Environmental Crisis: Major tech companies reporting 29-48% emissions increases due to AI workloads
  3. Industry Adoption: 78% of organizations now using AI, up from 55% in early 2023
  4. Research Growth: Nearly 30% increase in responsible AI research publications

Persistent Challenges Throughout 2020-2025:

  • Human Validation Gap: Fewer than 1% of XAI papers validate explainability with humans
  • Standardization Deficit: Lack of standardized RAI evaluations among major AI developers
  • Environmental Impact: Exponential growth in computational demands and carbon emissions
  • Bias Persistence: Continued algorithmic discrimination across domains despite mitigation efforts

Emerging Solutions and Trends:

  • Technical Innovation: Advanced reasoning methods (CoT, ToT, Graph of Thoughts)
  • Regulatory Harmonization: International cooperation on AI governance standards
  • Industry Maturation: Shift from voluntary guidelines to mandatory compliance frameworks
  • Environmental Awareness: Growing focus on sustainable AI and green computing practices

7. Future Research Directions

Priority Areas for Continued Research:

Technical Innovations Needed:

  1. Human-Validated XAI: Methods that demonstrably improve human understanding and decision-making
  2. Energy-Efficient AI: Algorithms and architectures that dramatically reduce computational requirements
  3. Robust Bias Detection: Automated systems for identifying and mitigating algorithmic discrimination
  4. Federated Governance: Distributed approaches to AI oversight and compliance

Policy and Governance Gaps:

  1. Global Standards Harmonization: Unified international frameworks for AI governance
  2. Real-time Compliance Monitoring: Automated systems for regulatory adherence
  3. Cross-jurisdictional Enforcement: Mechanisms for managing global AI deployment
  4. Stakeholder Engagement: Inclusive processes for affected community participation

Societal Integration Challenges:

  1. Digital Divide: Ensuring equitable access to responsible AI benefits
  2. Cultural Adaptation: Contextualizing AI ethics for diverse global communities
  3. Intergenerational Impact: Long-term consequences of AI deployment decisions
  4. Democratic Participation: Public involvement in AI governance decisions

8. Contributing Authors

  • SHAINA RAZA∗, Vector Institute, Canada
  • RIZWAN QURESHI∗, Center for Research in Computer Vision, The University of Central Florida, USA
  • ANAM ZAHID, Department of Computer Science, Information Technology University of the Punjab, Pakistan
  • JOE FIORESI, Center for Research in Computer Vision, The University of Central Florida, USA
  • FERHAT SADAK, Department of Mechanical Engineering, Bartin University, Türkiye
  • MUHAMMAD SAEED, Saarland University, Germany
  • RANJAN SAPKOTA, Washington State University, USA
  • ADITYA JAIN, University of Texas at Austin, USA
  • ANAS ZAFAR, Fast School of Computing, National University of Computer and Emerging Sciences, Pakistan
  • MUNEEB UL HASSAN, School of Information Technology, Deakin University, Australia
  • AIZAN ZAFAR, Center for Research in Computer Vision, University of Central Florida, USA
  • HASAN MAQBOOL, Independent Researcher, USA
  • ASHMAL VAYANI, Center for Research in Computer Vision, University of Central Florida, USA
  • JIA WU, MD Anderson Cancer Center, The University of Texas, USA
  • MAGED SHOMAN, University of Tennessee; Oak Ridge National Lab, USA

∗Equal contributors

📝 Citation

If you find our survey useful for your research, please cite the following paper:

@article{raza2025responsible,
  title        = {Who is Responsible? The Data, Models, Users or Regulations? Responsible Generative AI for a Sustainable Future},
  author       = {Shaina Raza and Rizwan Qureshi and Anam Zahid and Joseph Fioresi and Ferhat Sadak and Muhammad Saeed and Ranjan Sapkota and Aditya Jain and Anas Zafar and Muneeb Ul Hassan and Aizan Zafar and Hasan Maqbool and Ashmal Vayani and Jia Wu and Maged Shoman},
  journal      = {TechRxiv Preprints},
  year         = {2025},
  publisher    = {TechRxiv},
  doi          = {10.36227/techrxiv.173834932.29831105},
  url          = {https://www.techrxiv.org/doi/full/10.36227/techrxiv.173834932.29831105/v1}
}

Last Updated: July 2025
