Technology—specifically, artificial intelligence—is reshaping institutional investing for both asset managers and institutional asset owners, and Ashby Monk of the Stanford University Research Initiative on Long-Term Investing says the transformation is just in the first inning.
Institutions and their boards, CIOs and investment teams need to address questions of both portfolio governance and data governance. They must also, in some cases, formalize decisionmaking processes to accommodate the potentially expanding role of new and evolving technology.
“What scope does the board have for governing AI?” asks Monk, the executive and research director of the Stanford initiative. “There is so much change in decisionmaking coming … as organizations leapfrog from Excel to AI.”
In particular, he points to the need for investment organizations to ensure that they have detailed documentation for all the processes that people on their teams use to make decisions. Documentation is necessary to ensure that when rules-based technology processes or systems are part of the organization, they function within the governance and decisionmaking parameters set by the organization in its investment policy and elsewhere.
It may sound simple, but Monk says “existing AI doesn’t have a well-governed decision environment.”
Institutional investment organizations have processes—known to a CIO, the team and the board—for different activities they do regularly, such as signing an investment memo. But if AI is going to become part of that process, “everything is going to have to be defined and governed,” Monk says. “All the little pieces are artifacts or inputs of a well-governed system.”
Another important factor to consider when contemplating the inclusion of AI in different investment processes, he adds, is trust.
“In investing, human beings trust other people,” Monk says, going on to say that the intangible qualities that trust adds to human interactions—especially when making investment decisions—need to be identified and codified so that they can become part of a process in which AI is embedded.
Investments Growing
In 2025, venture capital investments in AI firms globally made up 61%—$258.7 billion—of all VC investment, more than doubling its share from 30% in 2022, according to a February policy brief from the Organization for Economic Co-operation and Development. Within all VC AI investments, funding to generative AI firms surged to 14% ($35.3 billion) in 2025 from about 2% ($2.8 billion) in 2022.
The OECD runs the OECD AI Policy Observatory to develop and provide global frameworks for responsible development and deployment of AI technologies. From March 30 through April 1, it will hold the 2026 International Conference on AI in Work, Innovation, Productivity and Skills virtually.
The conference’s agenda reflects some of the questions raised by Monk, including a session on the rise of agentic AI entitled “What happens when AI moves from following instructions to acting on our behalf?” The panel, scheduled for Monday, is described as a discussion of questions about how AI will reshape industries by “streamlining complex tasks and enhancing collaboration between humans and machines.”
In February, the OECD published “Due Diligence Guidance for Responsible AI” that offers businesses “an internationally agreed, government-backed tool to demonstrate that markets and societies can trust their AI systems,” noting that “risks throughout the AI value chain are continually evolving.”
The publication addresses the following topics:
- Data provision and data annotation;
- Dataset creation and curation;
- Developing, adapting or providing code for third-party use, including contributions to open-source libraries and software components for AI development; and
- Development of metrics and evaluation measures.
It also presents a framework “related to the provision of financial, logistical, administrative, and hardware inputs needed to support the development of the AI system.”
The guidance is backed by the OECD’s member countries, 17 partner governments and the EU and is intended to help organizations “navigate the complex terrain of AI risk management.”
It’s the Data
Monk and colleague Dane Rook, in episodes of their podcast “The Technologized Investor,” often talk to technology startup founders and others building AI tools for pension funds, endowments and sovereign funds about how innovations could disrupt global capital markets and about what is necessary for that to happen at scale. Rook is a research engineer at Stanford Long-Term Investing. They have identified challenges and opportunities relating to AI and data in many forms and across multiple asset classes:
- Data available to investors in private markets is fragmented and hard to govern, which is becoming a business constraint as AI and liquidity issues change the investment landscape; and
- AI can help institutions find insights in every document and dataset, but investment organizations need to prepare and train their teams to leverage data at scale.
Prohibition Not an Option
Scott Miller, a senior consultant in Segal’s administration and technology consulting practice and a former executive director of the North Dakota Public Employees Retirement System, wrote in 2025 about plan governance and AI considerations for public pension funds in an article for the National Conference on Public Employee Retirement Systems. He noted that “many plans are delaying … incorporate[ing] AI guidance into their governance documents” and said that delay is a potential problem, even if plans do not intend to adopt AI as a tool.
“Working through the decision of whether and how to incorporate AI into the workplace seems to be a daunting task better left for another day,” Miller wrote. “However, there are many reasons to replace ‘delay’ with ‘immediate action,’ not the least of which is the fiduciary responsibility to [plan] members and beneficiaries.”
Employees in all kinds of businesses are using AI at work, even if it is against policy, Miller wrote, citing 2023 research from software provider Salesforce that found that “55% of surveyed employees had used unapproved generative AI tools at work. Even worse, 69% of those employees had never received training on how to use generative AI safely and ethically at work.”
For that reason, Miller advised that even plans that choose not to use AI at all should carefully develop policies specifically prohibiting its use and add IT restrictions preventing it across the organization and its vendors. The policy for vendors is important and should require them, according to Miller, to abide by the plan’s policy and “maintain the confidentiality and security of sensitive information.”
He offered a set of best practices for incorporating AI into a pension plan’s workplace and stressed the need to make sure everyone in the workforce understands the policy. In addition, the policy and governance related to an evolving technology like AI cannot be “set it and forget it.” Plans need to institutionalize efforts to keep the policy updated.
Quoting former President Theodore Roosevelt, Miller concluded: “In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.”
Tags: Artificial Intelligence, Ashby Monk, board governance, Dane Rook, Data Governance, fund governance, Investment Management, NCPERS, OECD, Segal
