Standards, frameworks, and legislation for artificial intelligence (AI) transparency

Brady Lund, Zeynep Orhan, Nishith Reddy Mannuru, Ravi Varma Kumar Bevara, Brett Porter, Meka Kasi Vinaih, and Padmapadanand Bhaskara 

Abstract

The global landscape of transparency standards, frameworks, and legislation for artificial intelligence (AI) shows an increasing focus on building trust, accountability, and ethical deployment. This paper presents a comparative analysis of key frameworks for AI transparency, such as the IEEE P7001 standard and the CLeAR Documentation Framework, highlighting how jurisdictions including the United States, the European Union, China, and Japan are addressing the need for transparent and trustworthy AI systems. Common themes across these standards include the need for tiered transparency levels based on system risk and impact, continuous documentation updates throughout the development and revision processes, and the production of explanations tailored to various stakeholder groups. Several key challenges arise in the development of AI transparency standards, frameworks, and legislation, including balancing transparency with privacy, protecting intellectual property rights, and addressing security concerns. Promoting adaptable, sector-specific regulatory structures is critical to developing frameworks flexible enough to keep pace with AI’s rapid technological advancement. These insights contribute to a growing body of literature on how best to develop transparency regulations that not only build trust in AI but also support innovation across industries.
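
As a purely illustrative aside (not drawn from the paper), the short Python sketch below shows one way the recurring requirements named above (a risk tier, stakeholder-tailored explanations, and continuously updated documentation) might be represented as a simple record. All class names, tier labels, and the example system are hypothetical assumptions; the tiers loosely echo the EU AI Act's risk categories.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers, loosely modeled on the EU AI Act's categories (assumption)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class TransparencyRecord:
    """Hypothetical documentation record for a single AI system.

    Captures the three recurring requirements discussed in the abstract:
    a risk tier, audience-specific explanations, and a revision log.
    """
    system_name: str
    risk_tier: RiskTier
    # Explanations keyed by audience, e.g. "end_user", "regulator", "developer".
    explanations: dict[str, str] = field(default_factory=dict)
    # Append-only log of (date, change summary) entries, updated with each revision.
    revision_log: list[tuple[date, str]] = field(default_factory=list)

    def record_revision(self, summary: str) -> None:
        """Append a dated entry so documentation stays in step with the deployed system."""
        self.revision_log.append((date.today(), summary))


# Example: a hypothetical high-risk system documented for two audiences.
card = TransparencyRecord(
    system_name="loan-approval-model",
    risk_tier=RiskTier.HIGH,
    explanations={
        "end_user": "Applications are scored on income, credit history, and debt ratio.",
        "regulator": "Gradient-boosted trees; training data summary available on request.",
    },
)
card.record_revision("Retrained on 2024 Q2 data; feature list unchanged.")
```

In this sketch, the revision log is append-only so that documentation can be shown to have tracked the system through development and later updates, mirroring the continuous-documentation theme the standards share.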

References

  1. Artificial Intelligence Research, Innovation, and Accountability Act of 2023, S. 3312 (2024)

  2. Attard-Frost, B., De los Ríos, A., Walters, D.R.: The ethics of AI business practices: a review of 47 AI ethics guidelines. AI Ethics 3(2), 389–406 (2023)

  3. Australian Government: Artificial Intelligence Ethics Framework (2021). Retrieved from https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework

  4. Balasubramaniam, N., Kauppinen, M., Rannisto, A., Hiekkanen, K., Kujala, S.: Transparency and explainability of AI systems: from ethical guidelines to requirements. Inf. Softw. Technol. 159, article 107197 (2023)

  5. Bareis, J., Katzenbach, C.: Talking AI into being: the narratives and imaginaries of national AI strategies and their performative politics. Sci. Technol. Hum. Values 47(5), 855–881 (2022)

  6. Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623 (2021)

  7. Bhalla, N., Brooks, L., Leach, T.: Ensuring a ‘Responsible’ AI future in India: RRI as an approach for identifying the ethical challenges from an Indian perspective. AI Ethics 4, 1409–1422 (2023)

  8. Binns, R.: Fairness in machine learning: lessons from political philosophy. In: Proceedings of the 1st Conference on Fairness, Accountability and Transparency, pp. 149–159. Association for Computing Machinery (2018)

  9. Booker, C.: U.S. Senate Introduces the Algorithmic Accountability Act (2023). https://www.booker.senate.gov

  10. Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI Ethics 1, 61–65 (2021)

  11. Caldwell, B., Cooper, M., Reid, L.G., Vanderheiden, G., Chisholm, W., Slatin, J., White, J.: Web Content Accessibility Guidelines (WCAG) 2.0. World Wide Web Consortium (W3C) (2008)

  12. Castelvecchi, D.: Can we open the black box of AI? Nat. News 538(7623), 20–23 (2016)

  13. Chmielinski, K., Newman, S., Kranzinger, C.N., Hind, M., Vaughan, J.W., Mitchell, M., et al.: The CLeAR Documentation Framework for AI Transparency. Shorenstein Center on Media, Politics and Public Policy (2024). https://shorensteincenter.org/clear-documentation-framework-AI-transparency-recommendations-practitioners-context-policymakers/

  14. Coeckelbergh, M.: AI ethics. The MIT Press (2020)

  15. Daneshjou, R., Smith, M.P., Sun, M.D., Rotemberg, V., Zou, J.: Lack of transparency and potential bias in artificial intelligence data sets and algorithms: a scoping review. JAMA Dermatol. 157(11), 1362–1369 (2021)

  16. Dey, A., Cyrill, M.: India’s regulation of AI and large language models (2024). India Briefing. https://www.india-briefing.com/news/india-regulation-of-ai-and-large-language-models-31680.html/

  17. Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K., Dugan, C.: Explaining models: an empirical study of how explanations impact fairness judgment. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 275–285. Association for Computing Machinery (2019)

  18. European Commission: Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final. Brussels (2021)

  19. European Parliament: Artificial Intelligence Act: MEPs adopt landmark law (2024). https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

  20. Everson, J., Smith, J., Marchesini, K., Tripathi, M.: A regulation to promote responsible AI in health care. Health Aff. (2024). https://doi.org/10.1377/forefront.20240223.953299

  21. Felzmann, H., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A.: Towards transparency by design for artificial intelligence. Sci. Eng. Ethics 26(6), 3333–3361 (2020)

  22. Fernandez-Quilez, A.: Deep learning in radiology: ethics of data and on the value of algorithm transparency, interpretability and explainability. AI Ethics 3(1), 257–265 (2023)

  23. Goodall, N.J.: Machine ethics and automated vehicles. In: Meyer, G., Beiker, S. (eds.) Road Vehicle Automation, pp. 93–102. Springer (2014)

  24. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Daumé, H., III., Crawford, K.: Datasheets for datasets. Commun. ACM 64(12), 86–92 (2021)

  25. Government of Canada: Directive on Automated Decision-Making (2021). Retrieved from https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592

  26. Hickman, T., Zaidi, Z., Mair, D.: AI Watch: Global regulatory tracker—OECD (2024). White & Case. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-oecd

  27. Hickok, M.: Lessons learned from AI ethics principles for future actions. AI Ethics 1(1), 41–47 (2021)

  28. Hind, M., Houde, S., Martino, J., Mojsilović, A., Piorkowski, D., Richards, J.: Experiences with improving the transparency of AI models and services. In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–8. Association for Computing Machinery (2020)

  29. Holland, S., Hosny, A., Newman, S., Joseph, J., Chmielinski, K.: The Dataset Nutrition Label: A Framework to Drive Higher Data Quality Standards. arXiv preprint arXiv:1805.03677 (2018)

  30. Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., et al.: Towards accountability for machine learning datasets: practices from software engineering and infrastructure. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 560–575. Association for Computing Machinery (2021)

  31. IEEE Standards Association: IEEE Standard for Transparency of Autonomous Systems. IEEE Std 7001-2021, pp. 1–54 (2022)

  32. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1(9), 389–399 (2019)

  33. Kamiya, M., Keate, J.: AI Watch: Global regulatory tracker—Japan. White & Case (2024). https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-japan

  34. Kazim, E., Koshiyama, A.: The interrelation between data and AI ethics in the context of impact assessments. AI Ethics 1, 219–225 (2021)

  35. Larsson, S., Heintz, F.: Transparency in artificial intelligence. Internet Policy Rev. 9(2), 1–16 (2020)

  36. Lauer, D.: You cannot have AI ethics without ethics. AI Ethics 1, 21–25 (2020)

  37. Lauw, N., Ching, P.F., Cheng, A.: Part 4—AI Regulation in Asia (2024). RPC. https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-4-ai-regulation-in-asia

  38. Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S., Wang, Z.: ChatGPT and a new academic reality: artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. J. Am. Soc. Inf. Sci. 74(5), 570–581 (2023)

  39. Luong, N.: China’s AI governance: Engaging the global South (2024). National Bureau of Asian Research. https://www.nbr.org/publication/chinas-ai-governance-engaging-the-global-south/

  40. Madaio, M.A., Stark, L., Wortman Vaughan, J., Wallach, H.: Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–14. Association for Computing Machinery (2020)

  41. Mannuru, N.R., Shahriar, S., Teel, Z.A., Wang, T., Lund, B.D., Tijani, S., et al.: Artificial intelligence in developing countries: the impact of generative artificial intelligence (AI) technologies for development. Inf. Dev. (2023). https://doi.org/10.1177/02666669231200628

  42. Memarian, B., Doleck, T.: Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: a systematic review. Comput. Educ. Artif. Intell. 5, article 100152 (2023)

  43. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)

  44. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Gebru, T.: Model cards for model reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 220–229. Association for Computing Machinery (2019)

  45. Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics 26(4), 2141–2168 (2020)

  46. National Artificial Intelligence Advisory Committee: Towards standards for data transparency for AI models (2024). https://ai.gov/wp-content/uploads/2024/06/PROCEEDINGS_Towards-Standards-for-Data-Transparency-for-AI-Models.pdf

  47. Ng, A.: Written Statement of Andrew Ng Before the U.S. Senate AI Insight Forum, December 11, 2023 (2023). https://aifund.ai/insights-written-statement-of-andrew-ng-before-the-u-s-senate-ai-insight-forum/

  48. Okolo, C.T.: Reforming data regulation to advance AI governance in Africa (2024). Brookings. https://www.brookings.edu/articles/reforming-data-regulation-to-advance-ai-governance-in-africa

  49. Pagallo, U.: The legal challenges of big data: putting secondary rules first in the field of EU data protection. Eur. Data Prot. Law Rev. 3, 36 (2017)

  50. Pushkarna, M., Zaldivar, A., Kjartansson, O.: Data cards: purposeful and transparent dataset documentation for responsible AI. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 1776–1826. Association for Computing Machinery (2022)

  51. Quinn, M., Piper, B., Bliss, J.P., Keever, D.: Recommended methods for using the 2020 NIST principles for AI explainability. In: 2020 IEEE International Conference on Big Data (Big Data), pp. 2034–2037. IEEE (2020)

  52. Reinhardt, K.: Trust and trustworthiness in AI ethics. AI Ethics 3(3), 735–744 (2023)

  53. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you? Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Association for Computing Machinery (2016)

  54. Richards, J., Piorkowski, D., Hind, M., Houde, S., Mojsilović, A.: A Methodology for Creating AI FactSheets (2020). arXiv preprint arXiv:2006.13796

  55. Ridley, M.: Explainable artificial intelligence (XAI): adoption and advocacy. Inf. Technol. Libr. 41(2), 1–17 (2022)

  56. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019)

  57. Ruggeri, A.: Davos 2024: Can—and should—leaders aim to regulate AI directly? BBC Worklife (2024). https://www.bbc.com/worklife/article/20240118-davos-2024-can-and-should-leaders-aim-to-regulate-ai-directly

  58. Schiff, D.: Education for AI, not AI for education: the role of education and ethics in national AI policy strategies. Int. J. Artif. Intell. Educ. 32, 527–563 (2022)

  59. Shin, D.: Toward fair, accountable, and transparent algorithms: Case studies on algorithm initiatives in Korea and China. J. Eur. Inst. Commun. Cult. 26(3), 274–290 (2019)

  60. Srinivasan, R., Ghosh, D.: A new social contract for technology. Policy Internet 15(1), 117–132 (2023)

  61. Stoyanovich, J., Howe, B.: Nutritional labels for data and models. IEEE Data Eng. Bull. 42(3), 13–23 (2019)

  62. Swaminathan, N., Danks, D.: Application of the NIST AI Risk Management Framework to Surveillance Technology (2024). arXiv preprint arXiv:2403.15646

  63. Theodorou, A., Wortham, R.H., Bryson, J.J.: Designing and implementing transparency for real time inspection of autonomous robots. Connect. Sci. 29(3), 230–241 (2017)

  64. Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., Floridi, L.: The ethics of algorithms: key problems and solutions. AI Soc. 37, 215–230 (2021)

  65. Von Eschenbach, W.J.: Transparency and the black box problem: why we do not trust AI. Philos. Technol. 34(4), 1607–1622 (2021)

  66. Wachter, S., Mittelstadt, B., Floridi, L.: Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int. Data Privacy Law 7(2), 76–99 (2017)

  67. Wang, Q., Li, R., He, G.: Research status of nuclear power: a review. Renew. Sustain. Energy Rev. 90, 90–96 (2018)

  68. Werner, J.: Russia updates national AI strategy (2024). Babl AI. https://babl.ai/russia-updates-national-ai-strategy/

  69. Winecoff, A.A., Bogen, M.: Improving governance outcomes through AI documentation: Bridging theory and practice (2024). arXiv preprint arXiv:2409.08960

  70. Winfield, A.F., Booth, S., Dennis, L.A., Egawa, T., Hastie, H., Jacobs, N., et al.: IEEE P7001: a proposed standard on transparency. Front. Robot. AI 8, 665729 (2021)

  71. Wulf, A.J., Seizov, O.: Artificial intelligence and transparency: a blueprint for improving the regulation of AI applications in the EU. Eur. Bus. Law Rev. 31(4), 611–640 (2020)

  72. Yekaterina, K.: Challenges and opportunities for AI in healthcare. Int. J. Law Policy 2(7), 11–15 (2024)
