Influx of Data Weighs On Legacy Technology, Data Systems
A Macquarie Group engineer tells asset managers and investors that outdated systems expose their organizations to risk and make it harder to integrate artificial intelligence tools.

With asset managers and investors inundated daily with data, Macquarie Group Principal Engineer Ranjit Singh told the 2026 AM Tech Day last week in Sydney that legacy systems are standing in the way.
Singh said legacy systems create a multitude of challenges that compound each other, making it “generally hard to solve.”
“The main problem we face is that institutions have their own legacy systems which have been there for a long time, and what happens is it creates a ceiling for what is possible to achieve,” Singh said.
Fragmented data and processes mean “that different systems are being owned by different teams, and there’s no one person who can create a singular, unified experience of what it should look like, which results in not being able to surface up the data in a consistent way and have a singular integration model.”
Singh said this can create security and compliance concerns when a third-party platform seeks access to an institution’s systems.
“In general, I think the challenges are not insurmountable,” Singh said. “It’s just that we have to be deliberately putting the investment into one architecture’s singular ownership and data standardization, and that’s what we are looking at doing at Macquarie.”
He added that data fragmentation also has a “tangible” impact, particularly for platforms.
“It’s one of those problems which is manageable at a small scale, but once your platform grows, it can really constrain you,” he said. “The core of the issue is that advisers and portfolios sit across multiple different systems and multiple different databases. When the lower layers have to reach out, they end up spanning multiple different databases, which causes latencies, performance bottlenecks and inconsistencies in data.”
Singh said the problem arises because every database is set up separately, and the issues compound at scale.
“When you are running in a data flow pattern, your data flows are limited by the weakest link there. So, if you want to run real time, you potentially can’t, because some of those purposes which you’re running may be batch-based, and scalability has similar challenges,” he said. “Last but not least, you can’t really create any meaningful AI on top of data which is not consistent and fragmented.”
A version of this article originally appeared in our sister publication, Financial Standard, which, like CIO, is owned by ISS STOXX.
