Regulating the Robots

Industry watchers view the current SEC proposal as overreaching but see benefits if changes are made.

Art by Klaas Verplancke

Institutional asset allocators and investors have long used various forms of artificial intelligence in portfolio management to gain a return edge, whether by building in-house models that use machine learning or natural language processing to incorporate unstructured data, or by hiring managers who know how to glean new insights from alternative information.

But proposed regulation from the Securities and Exchange Commission may change how these institutions use advanced technology, and some industry watchers fear it could put U.S. institutions at a disadvantage if regulation slows down the use of technology across the board.

A comment letter signed on Aug. 15 by 16 industry groups, including the Alternative Investment Management Association and the Managed Funds Association, said "the expansive nature of its restrictions would without question have a severely chilling effect on firms' use of technology."


Alvaro Almanza, chief legal officer at Toggle AI, a technology firm specializing in the asset management industry, says that because AI has already made its way into so much technology, the SEC's proposal could easily affect long-standing tools such as Excel or Bloomberg terminals.

If the rule isn’t modified, “you’re essentially telling broker-dealers and investment advisers to go back to using papyrus and quills,” Almanza says.

The proposed SEC predictive data analytics rule is designed to regulate conflicts of interest associated with broker-dealers' and investment advisers' use of artificial intelligence. If adopted, the rule would expand the fiduciary framework from the traditional disclosure-centered approach to conflicts of interest to one requiring firms to eliminate or neutralize those conflicts. The rule covers a wide range of technologies, and firms would have to create extensive governance and testing regimes for them.

The proposed rule will likely be changed given the pushback; however, some type of regulation is inevitable. Industry watchers say strengthening existing regulations and getting more clarity about how and where AI obtains data could improve oversight.

Lack of Transparency an Issue

Ashby Monk, executive and research director of the Stanford Research Initiative on Long-Term Investing, says that for asset allocators, AI raises a sticky subject. "These organizations need to be able to explain how they generate performance, and the explainability of AI has been difficult," Monk says.

Some of the newer AI entrants are not transparent about how they gather their information, with Almanza calling it a black box. Using ChatGPT as a simple example, he explains that users can ask a question, but the software does not explain where it got the answer. Fears about the sourcing of such information may be driving regulators, but regulating on fears "is hugely problematic … [it sets] a bad precedent," he says.

Dave Nadig, an independent financial futurist, says he understands regulators' desire to regulate AI, whether through the SEC's proposed rule or Europe's AI Act, but "the pace of innovation so far is outstripping the ability of regulators to absorb what's going on, much less effectively regulate for the common good," he says, later adding about the SEC's proposal, "my suspicion is this is very dead in the water."

The EU's rules take a risk-based approach, defining four levels of risk for AI systems, from minimal to unacceptable, assessing AI systems before they are released to the market and establishing an AI governance structure at the European and national levels.

Nadig says regulatory guidance is a better way to handle issues related to AI. Agencies such as the SEC and FINRA often issue standards; he points, for example, to the commission's standards for broker-dealers under Regulation Best Interest. If regulators are concerned about malfeasance involving AI, FINRA could also emphasize that it will strictly enforce existing rules governing broker-dealers.

“If regulators and politicians were actually serious about this, as opposed to just being reactive, we would be addressing those root underlying concerns about things like, who owns your data? What is the actual legal requirement for you to give me a best price? What is your actual liability when you give me a bad investment? We have rules. If those need to be firmed up, let’s firm up those rules,” he says.

Jennifer Han, chief counsel and head of global regulatory affairs at the Managed Funds Association, concurs that the SEC has not analyzed how existing rules address the concerns the agency cites as justification for the proposed AI rule. She points to the SEC's marketing rule: The proposed rule "rejects the entire premise behind the marketing rule, which is that sophisticated investors are capable of understanding advisers' disclosures," Han says.

How Regulations May Develop

Almanza says regulation may eventually govern how and what data is gathered for use with AI tools, to address black-box concerns. The approach some AI firms take of scraping data off websites may be fine, he says, but without knowing how those firms gather the information and what they accumulate, users cannot know whether the data is verifiable. Data received from exchanges, which has gone through regulatory oversight, is already better understood.

Almanza says there are a lot of merits to steering the industry to using more reliable sources, documenting how data is found and allowing audits of the information AI firms are selling.

“I could see unifying companies behind principles of how you’re going to approach deploying this technology,” he says.

If regulators can take a principles-based approach about transparency and fiduciary duty, rather than a rules-based approach to managing conflicts, the SEC could establish guidelines to allow innovation in an industry not known for pioneering ideas, Monk says.

“The way you get fired from a pension fund is you go and do something innovative that’s different from your peer group. If it works out well, you become the next David Swensen. If it works out poorly, you get fired, because these organizations weren’t designed to innovate,” Monk says.  

He says given how regulated investors are in the asset allocation field, it is probably OK for the SEC to be a step ahead in setting guidance for AI use.

“One of the ways you bring innovation into these investment organizations is you set some rules, you give them a safe space to be creative. And so, if the SEC is creating a safe space to innovate, I think it will help everybody. If they are creating a cumbersome set of rules that will thwart innovation, then I think it could harm,” he says.

