AI is changing the world around us at a remarkable pace – the gulf between what was possible at the beginning of 2023 and at the end of that year demonstrates how quickly a technology can progress. As with any new technology, the new opportunities come with risks that must be acknowledged and actively managed. A robust framework for the responsible development and use of AI will ultimately lead to faster innovation and broader, safer adoption across society.
This morning, the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce officially established the U.S. Artificial Intelligence Safety Institute Consortium (AISIC), which aims to support the creation of safe and trustworthy artificial intelligence (AI) systems. Sonar is honored to participate in this effort and excited to join other leaders at the forefront of AI development.
AISIC will bring together the largest collection of AI developers, users, researchers, and affected groups in the world. I believe that this step – made in coordination with the world’s leading technology companies and AI innovators – is a strong move toward establishing a sustainable foundation for the development of AI technologies.
As the world’s leading Clean Code company, we believe that ensuring the quality and security of software code must be a critical part of any comprehensive framework established to safeguard the responsible development and use of AI. The way we write code has already changed, with the majority of developers experimenting with or actively using AI coding assistants. As many developers have experienced, and as a growing body of academic research has confirmed, code generated by AI often contains bugs and errors, along with readability, maintainability, and security issues.
With AI, teams build, iterate, and solve problems faster – they can now produce code, content, and collateral at a pace and cost that would have been unfathomable just a few years ago. For AI to reach its full potential and positively impact the lives of billions of people, we as a tech community must fulfill our societal responsibility to put in place a strong framework for how we build products and deliver services – one that helps identify and manage the risks involved while creating space for innovation and experimentation.
With more than 7 million developers using Sonar solutions (SonarLint, SonarQube, and SonarCloud), we at Sonar have the expertise to help address the unique challenges and risks associated with AI code generation in the software development lifecycle. This knowledge is particularly relevant to AISIC's goal of developing a scalable and proven model for the safe development and use of AI. We look forward to collaborating with members of AISIC and other thought leaders to advance the development of responsible AI.
Additional information about the Consortium can be found here.