Artificial intelligence is driving markets, conversations, and innovation at a breakneck pace. It took ChatGPT a record-breaking five days to reach one million users; that compares with 2.5 months for Instagram and 3.5 years for Netflix.
But not everyone is equally keen to go full steam ahead. In an open letter released in March and signed by the likes of Elon Musk and Steve Wozniak, technology leaders urged thoughtfulness and caution.
The jury is still out on whether the technological leap that AI represents will propel humanity to greater heights or create unprecedented new risks. The most likely answer is “both,” creating a profound moment for industry, commerce, society, and government.
This period of uncertainty creates an opportunity for the business community to share knowledge and help frame the future of AI development.
State of the State: A Legislative Vacuum
Legislators are aware of the need to create regulatory frameworks around AI development, but so far progress in the U.S. has been slow. In Senate Majority Leader Chuck Schumer’s June proposal, Safe Innovation in the AI Age, he likened the technological impact of AI today to “what the locomotive and electricity did for human muscle a century and a half ago.” He emphasized the need for security and accountability, protecting workers and intellectual property while promoting transparency. He also addressed the role of U.S. innovation in AI at a global level, and how American leadership will support democratic thought.
The striking feature of Sen. Schumer’s proposal is not what he said, but what he didn’t say.
Sen. Schumer didn’t propose any specific AI legislation, instead calling for more education and information, including a series of AI Insight Forums in the fall to help bring Congress up to speed on the technology. While education is a necessary next step, the pace of Capitol Hill isn’t keeping up with a technology marked by its exponential adoption. Industry leaders in technology have warned that unchecked growth represents a substantial threat to economic stability, innovation, and consumer safety. And the country is already behind: in contrast to other major AI players, the U.S. currently has no federal legislation explicitly addressing the potential harms of AI.
There are signs that the legislative vacuum may be filled by regulatory agencies rather than by Congress. U.S. Securities and Exchange Commission Chairman Gary Gensler said in a speech on July 17 that he has instructed agency staff to develop proposed rules to guard against potential market risks posed by AI. The next day, Michael Barr, the Federal Reserve’s vice chair for supervision, highlighted concerns about the threat AI may pose to fair lending.
An Opportunity for Businesses to Step Up
The education gap on Capitol Hill and the resulting lack of regulation create an opportunity for businesses in the U.S. to take a more comprehensive role in advancing safe AI frameworks. Edelman Trust Barometer research already suggests business is the only trusted institution, ahead of media, government, and NGOs. As the AI Insight Forums approach, businesses should consider how they’re communicating their approach to AI innovation to stakeholders, legislators, and consumers. An effective communications strategy includes:
Collaborate with Peers: Businesses that work directly with AI are in a strong position to advise on its use and misuse, especially compared with many legislators who may have limited, if any, knowledge of the technology. Businesses can translate this direct experience into clear development frameworks that communicate how each company is balancing the benefits of innovation against its risks. Because Sen. Schumer’s proposal doesn’t offer solutions or concrete frameworks for AI education and development, there is an opportunity for businesses to develop a shared ethical and innovative approach that goes a long way toward advancing progress.
Commit to Transparency: The confusion around AI has polarized public sentiment, with proponents arguing for its potential to positively impact society and detractors cautioning against the steep risks. Businesses must communicate with both audiences in order to succeed; this requires a commitment to transparency around AI development and a proactive approach to communications. In this context, businesses need to weigh the costs of building AI products in public view against the need to protect intellectual property.
Create a Trust Feedback Loop: AI success will ultimately be a function of stakeholder trust. While open, collaborative development and transparency will go part of the way toward building trust, these efforts are wasted if communication is a one-way street. Businesses therefore need to establish two-way channels (e.g., social media) that measure and respond to stakeholder sentiment regarding AI development efforts in real time. Having a finger on the public’s pulse not only advances business goals, but also supports closing the education gap for legislators.
Right now, businesses involved in AI development have a unique but limited window to help educate the public and legislators on how to most effectively integrate industry best practices for safety and innovation into AI regulation. The direction of U.S. global leadership in AI is being decided in real time, with collaboration, transparency, and communication at the center of the debate.
Jadis Armbruster, Senior Account Supervisor, Financial Communications & Capital Markets