Unlocking Performance Infrastructure: Making High-Performance Technology Work Across the Market

From outdated software stacks to overloaded servers and rising co-location costs, many firms are facing an uphill battle with trading infrastructure optimization. But throwing more hardware at the problem isn’t sustainable. It’s time to rethink what “high performance” really means and who today’s performance infrastructure is for.

Exegy recently sponsored a webinar on this topic that featured panelists Laurent de Barry, Exegy chief product officer; Anvar Karimson, Kepler Cheuvreux chief technology officer; Hank Hyatt, former Morgan Stanley managing director; and Minhaj Ahmed, HSBC product owner MSS equities. The discussion, moderated by Victor Anderson, content director at WatersTechnology, explored how firms beyond the traditional latency-focused crowd—including broker-dealers, agency desks, and mid-tier firms—can reduce complexity, shrink their data center footprint, and future-proof operations with modern high-performance trading infrastructure and more efficient execution architecture.

Defining “High Performance” in Modern Trading Infrastructure

High performance in trading infrastructure has never been just about speed. As Hyatt noted early in the discussion, every firm operates with a different latency profile: a clear definition of what “fast enough” means for their trading strategies, workflows, and market access requirements. Without understanding that profile, it’s impossible to design infrastructure that meaningfully improves execution quality and market access performance.
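One way to make a latency profile concrete is to write it down as explicit, per-workflow budgets that any infrastructure change can be tested against. The sketch below is hypothetical and not from the panel; the workflow names and microsecond figures are placeholders, chosen only to show the idea of "fast enough" differing by desk.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LatencyProfile:
    """Latency budget for one workflow, in microseconds (illustrative only)."""
    workflow: str
    p50_budget_us: float   # median target
    p99_budget_us: float   # tail target, often what execution quality hinges on

    def within_budget(self, measured_p50_us: float, measured_p99_us: float) -> bool:
        # A change "improves performance" only if it keeps both targets for this workflow.
        return (measured_p50_us <= self.p50_budget_us
                and measured_p99_us <= self.p99_budget_us)

# Placeholder profiles: the numbers are assumptions, not figures from the webinar.
profiles = [
    LatencyProfile("market-making quote update", p50_budget_us=5, p99_budget_us=20),
    LatencyProfile("agency order routing", p50_budget_us=200, p99_budget_us=1_000),
    LatencyProfile("post-trade analytics feed", p50_budget_us=50_000, p99_budget_us=250_000),
]

for p in profiles:
    ok = p.within_budget(measured_p50_us=4, measured_p99_us=30)
    print(f"{p.workflow}: fast enough? {ok}")
```

The point of writing budgets down this way is that "high performance" stops being a slogan and becomes a testable property of each workflow.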

Across the panel, a consensus emerged around five interrelated criteria that determine whether a trading system can be considered high performance:

  1. Latency: how quickly the data moves between endpoints
  2. Throughput: how much data the system can process
  3. Determinism: how consistently it performs across bursts and volatility
  4. Resilience: how well it adapts to unpredictable market conditions
  5. Cost efficiency: how well it scales without exponential resource growth 

But, as de Barry emphasized, these aren’t five independent pillars; they are deeply intertwined. Throughput, latency, and determinism in particular operate as a system: when throughput is constrained, queues build, which increases latency, and inconsistent processing shows up directly as jitter and unpredictable behavior.

This interdependence is why firms are hitting the limits of legacy architectures. Even modest increases in market data volume can tip older systems into queues, spikes, and tail latency behavior that directly impacts execution quality.
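The coupling is easy to see with a back-of-the-envelope model. The sketch below uses the textbook M/M/1 queue as a stand-in for a software feed handler; real market-data arrivals are far burstier, so actual tail behavior is worse than this smooth model suggests.

```python
# Minimal illustration of how constrained throughput turns into latency:
# an M/M/1 queue standing in for a software feed handler.
# Illustrative numbers only; real market data is bursty, so real tails are worse.

service_rate = 1_000_000  # messages/second the handler can process (assumed)

for offered_load in (0.50, 0.80, 0.90, 0.95, 0.99):
    arrival_rate = offered_load * service_rate
    # Mean time in system for an M/M/1 queue: 1 / (mu - lambda)
    mean_latency_us = 1e6 / (service_rate - arrival_rate)
    print(f"load {offered_load:.0%}: mean latency ~ {mean_latency_us:8.1f} us")
```

Even in this idealized model, moving from 90% to 99% utilization multiplies mean latency tenfold, which is exactly the "modest increase tips the system over" pattern the panel described.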

It’s precisely why high-performance trading infrastructure increasingly depends on centralized, hardware-accelerated processing, particularly field-programmable gate array (FPGA)-based processing. By removing unnecessary steps, reducing server sprawl, and eliminating bottlenecks at the ingestion layer, FPGAs don’t just reduce latency; they also increase determinism, absorb market bursts, and dramatically lower operational overhead. 

The Current State of Trading Infrastructure

High-performance technology was once the exclusive domain of ultra-low latency firms, but the landscape has shifted dramatically. As market data volumes surge and co-location power and space become increasingly constrained, the pressure to optimize trading infrastructure has become universal. These are no longer just the problems of the “fastest” firms; these are structural limitations affecting every part of the market—from market makers to agency brokers to mid-tier firms—all of whom rely on execution engines and market access gateways built on legacy components not designed for today’s load.

Scale Limits

Legacy architectures, especially those built around software-based feed handlers, are showing their age. As Exegy CEO David Taylor outlined in his Design Patterns for Market Data series, software feed handlers were never designed for today’s volatility, burstiness, or scale. Horizontal scaling looks simple on paper, but in practice, it leads to core exhaustion, workload imbalance, unpredictable tail latencies, and massive overprovisioning—all of which compound operational cost. In many cases, firms are dedicating half of a server’s computing power just to keep embedded feed handling afloat.
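A rough illustration of why fan-out amplifies rather than hides the problem (illustrative numbers only, not drawn from the webinar): if each software feed-handler shard independently exceeds its latency budget a small fraction of the time, the chance that some shard is slow on any given update grows quickly with shard count, which is why horizontally scaled stacks end up overprovisioned to protect the tail.

```python
# Back-of-the-envelope: tail amplification under horizontal scaling.
# Assumes each shard misses its latency budget 1% of the time, independently;
# real shards are not independent and bursts correlate, so this is a lower bound
# on intuition, not a model of any specific system.

p_slow_per_shard = 0.01

for shards in (1, 4, 16, 64):
    p_any_slow = 1 - (1 - p_slow_per_shard) ** shards
    print(f"{shards:3d} shards -> P(at least one slow) = {p_any_slow:.1%}")
```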

And that’s only part of the problem. Many firms are also wrestling with architectural decisions made a decade ago. The resulting technical debt is not accidental; it’s structural. 

As de Barry noted during the discussion: “At any point where you introduce technology, you will have technical debt. It’s about [having] the discipline and appetite to constantly address this so your stack doesn’t diverge more and more.”

Fragmentation

Technical debt grows fastest in organizations where multiple teams have built their own infrastructure stacks around different performance needs. What likely began as a pragmatic approach to give each desk autonomy has evolved into a network of duplicated pipelines, inconsistent APIs, bespoke monitoring tools, and fragile integrations. Beyond being inefficient, fragmentation actively increases the risk of outages, slows incident response, and increases the day-to-day operational burden on engineering teams.

Technical Debt

Lastly, as both Bain and McKinsey have shown, rising technology spend doesn’t correlate with improved productivity. Firms are allocating 60-80% of their technology budgets to simply “run the business.” With so much spend locked into maintaining fragmented systems, very little remains for true modernization.

The result: more systems to maintain, more integration points to manage, more exchange-driven change (EDC) churn to absorb, and fewer resources left to address the architectural issues that created these problems in the first place.

Why Architecture (Not Budget) Determines Trading Infrastructure Performance

For many firms, trading infrastructure performance has become less a question of budget than of architecture. Much of today’s technology spend is absorbed by maintaining fragmented systems: legacy stacks, duplicated infrastructure, and the accumulated technical debt of decisions made years ago.

As de Barry emphasized during the discussion, the real cost driver is business-as-usual complexity. Simply maintaining existing systems becomes more expensive over time as power, space, and operational overhead rise steadily. “Paying to continue with business as usual isn’t a viable option,” he noted in the session. “Costs continue to rise—roughly 10% every 18 months.”
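Taking that quoted rate at face value, the compounding alone is worth spelling out (illustrative arithmetic, not a forecast):

```python
# Compounding the quoted figure: ~10% growth every 18 months.
# Illustrative arithmetic only, assuming the rate simply persists.

growth_per_period = 0.10   # 10% per 18-month period (as quoted in the session)
period_months = 18

for years in (3, 5, 10):
    periods = years * 12 / period_months
    multiplier = (1 + growth_per_period) ** periods
    print(f"after {years:2d} years: ~{(multiplier - 1):.0%} above today's run-rate")
```

Under that assumption, business-as-usual spend is roughly a fifth higher after three years and well over a third higher after five, before any new capability is added.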

That rise is amplified by architectural inefficiencies. Karimson underscored a core principle: eliminating everything that doesn’t directly support the intended function. High-performance systems deliver outsized value when they consolidate workloads, minimize redundant code, and reduce total server footprint. As he put it in the discussion, “any code is a liability,” and minimizing infrastructure code directly reduces the maintenance burden.

But reducing complexity doesn’t mean abandoning flexibility. Both de Barry and Karimson highlighted the importance of modular, adaptable architectures—systems that can evolve incrementally without forcing disruptive redesigns. This discipline is what enables firms to move from reactive upgrades to long-term, deliberate modernization.

For many institutions, partnering with the right market data vendor can accelerate that shift. A strong technology partner can reduce operational load, simplify integration across desks, and provide the architectural foundation to absorb market-driven changes without escalating costs. The goal is not to spend more; it’s to spend smarter, redirecting effort and budget from maintenance to innovation.

Where to Begin: Fixing the Trading Infrastructure Bottleneck First

When firms begin modernizing their trading infrastructure, the instinct is often to focus on visible components, such as execution engines, smart order routers, and post-trade workflows. But as the panel emphasized, the greatest performance and scalability gains come from addressing the true bottlenecks at the foundation of the architecture.

For most firms, that bottleneck is market data.

Hyatt noted that market data is “where everything starts.” It dictates how quickly firms can react to changing conditions, how deterministically systems behave under bursty traffic, and how much headroom exists for execution logic or analytics. Improving performance here has an outsized impact because every downstream function, from the trading engine to the order-routing workflow, depends on the stability and throughput of the ingestion layer.

The panel aligned on a clear order of operations:

  1. Market data processing: Starting here to reduce latency, eliminate queues, and improve determinism has the highest impact.
  2. Exchange connectivity: Once ingestion is stabilized, optimizing order-path latency and reliability has the next greatest effect on execution quality.
  3. Client connectivity: Particularly for agency desks, ensuring deterministic intake of order flow reduces jitter and simplifies downstream routing and compliance logic.

These aren’t sequential priorities so much as cascading dependencies. Market data shapes the workload, which shapes execution behavior, which shapes client-facing performance. Modernization begins at the top of that chain—not at the edges.

And critically, modernizing trading infrastructure is not a one-time event. As Anderson noted, “high-performance infrastructure isn’t a one-and-done proposition.” New venues, new EDC cycles, new analytics workloads, and new data volumes continually put pressure on the pipeline. Firms must carve out dedicated capacity not just to innovate but also to retire legacy components, remove unnecessary code paths, and prevent technical debt from accumulating faster than it can be addressed.

The Architectural Shift Ahead: The Future of High-Performance Trading Infrastructure

The panel’s final takeaway was clear: Firms can no longer treat high-performance technology as something reserved for the fastest desks. As de Barry emphasized, ultra-low latency technologies solve far more than speed. They address the very performance pillars that most firms are now struggling with: throughput limits, determinism failures, burst-handling capacity, footprint constraints, and rising operational overhead.

In other words, the technologies once associated solely with nanosecond trading are now the most practical ways to simplify and scale modern trading infrastructure. The problem has changed; the tools that solve it haven’t.

That shift is exactly what shaped Exegy Nexus™.

Nexus brings the benefits of hardware acceleration—predictable performance, consistency under load, and significant footprint and cost reduction—into a unified architecture that serves every desk, from market making to agency execution to enterprise analytics. It is the first platform designed to handle the full spectrum of latency profiles without forcing firms to maintain multiple stacks or overprovision compute simply to keep pace with market data volumes or the requirements of modern electronic execution.

As trading systems grow more complex and data volumes continue to rise, the firms that win will be the ones that modernize their architecture, not just their hardware. The future of trading infrastructure belongs to platforms that unify performance, simplify operations, and scale intelligently across the entire business—and Nexus was built precisely for that future.