Infinite Data, Finite Space: How Market Data Volumes Are Driving a Rack Space Crisis in Trading

The capital markets industry is entering an uncomfortable new reality: market data volumes are growing without limit, but prime co-location space and power are not.

For market makers, agency brokers, and quant shops alike, real estate inside top-tier data centers is as critical to a trading strategy as the code itself, regardless of the latency profile. Yet many firms are hitting a wall:  

  • More exchanges and trading venues — each bringing more feeds, higher data rates, and greater bandwidth requirements 
  • Tick size reductions — multiplying the quotes generated per symbol and pushing message rates higher 
  • Volatility surges — creating unpredictable bursts that strain infrastructure 

The result? Waitlists for prime co-location stretch for years, power and cooling are tapped out, and operational teams are forced into suboptimal trade-offs. 

This isn’t a temporary capacity hiccup. It’s a structural bottleneck, and it’s redefining how firms manage infinite data within a finite space. 

Why Rack Space Location Can Make or Break Trading Performance 

Not all rack space is created equal. For firms competing on microseconds and nanoseconds, where your servers are located and how readily they can scale are directly tied to execution quality, P&L, and market competitiveness. 

It’s not just about the amount of rack space available; it’s about proximity to the matching engine, power availability, latency equalization, and the physical limits of the world’s most sought-after co-location sites.

Location is Everything  

Take US equities: NY3, NY4, and NY5 are latency-equalized, ensuring the shortest possible fiber path to the exchange matching engine. Secondary facilities like NY2 or NY7 are farther away and not equalized, instantly adding latency that can mean the difference between winning and losing a trade. 

  • Even a single meter of fiber can add roughly 5 nanoseconds (see the arithmetic below), which can mean the difference between being first at the post or left behind. Every delay reduces fill ratios, erodes profitability, and increases slippage.  
  • To push the speed advantage further, many low-latency participants use wireless links between sites, which can be up to 40% faster than fiber.  
  • The “wrong building” disadvantage can’t be engineered away with better networking.  

When microseconds determine not only a firm’s competitiveness in the market but also its fill ratios, every meter (and every nanosecond) truly counts. 
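
The ~5 ns figure follows directly from the physics. Light in fiber travels at c/n, and typical single-mode fiber has a refractive index of roughly n ≈ 1.47:

  t_fiber = n / c ≈ 1.47 / (3.0 × 10^8 m/s) ≈ 4.9 ns per meter
  t_air ≈ 1 / c ≈ 3.3 ns per meter

A fiber path therefore takes about 47% longer per meter than a line-of-sight path through air, which, together with the straighter routes wireless links can take, is where headline figures like “up to 40% faster” come from.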

The Unsustainable Growth of Market Data 

At the same time, market data is growing faster than firms can expand. New venues, smaller tick sizes, and heavier quoting activity mean:  

  • Message rates and bandwidth requirements grow each year 
  • Frequent data bursts max out server processing capacity 
  • More processing means more power draw per cabinet 
  • More power means more cooling demand with each added workload 
  • More servers mean more operational complexity and higher cost 

In short, trading firms and data center operators simply cannot scale their infrastructure fast enough to keep up with data growth, forcing trade-offs that affect their entire business.

The Cost Spiral of Expanding in Prime Co-location 

As demand for equalized co-location rises, securing additional cabinets and power in premium facilities is expensive, and the costs compound:  

  • Premium lease rates that rise with demand 
  • Escalating power costs as server density increases 
  • Cooling infrastructure strain that limits new deployments 
  • Increased points of failure as server sprawl grows  

New Data Centers Won’t Save You 

On paper, building a new facility might sound like the obvious fix. It’s rarely that straightforward.  

The most desirable co-location sites sit in heavily concentrated areas, such as Northern New Jersey, where real estate is in high demand. Expanding in these locations requires city approvals, specialized permits, and the physical square footage to build on, all of which come with constraints and long lead times. 

Even when new expansions do get the green light, demand tends to outpace supply. Premium rack space fills up before construction is even complete, keeping waitlists long and pricing power firmly in the landlord’s hands.  

For firms that require real-time data processing to be near the matching engine, moving a latency-sensitive workload to a second-tier site to gain extra space often introduces more problems than it solves. 

When strategies or desks must split across multiple sites, the operational and regulatory complexities multiply. For example, a multi-strategy hedge fund might keep its most latency-sensitive market-making algos in NY4 but run mid-frequency quant models or analytics from NY2 because there isn’t room to host everything in the same facility. 

This separation can lead to inconsistent performance, extra data hops, and unnecessary latency — all of which chip away at the very edge that co-location provides. The reality is that more square footage alone won’t fix a structural imbalance between ever-growing data and the finite nature of prime data center space. 

Workarounds Are Already Hurting Performance 

Many firms have managed to defer tough choices about rack space and footprint, but that approach is losing viability. As market data volumes surge, some trading teams experience increased latency, dropped packets, or costly outages precisely when speed matters most. Each resync event or missed packet means lost fills, missed opportunities, and reputational damage that’s hard to undo. 

Because there isn’t enough space in prime locations, teams must make suboptimal decisions about which workloads to keep close and which to push to a less desirable site. Over time, these workarounds erode the very performance edge that high-frequency strategies and competitive market making rely on. 

Breaking the Bottleneck with Specialized Hardware  

The reality is that firms can’t outpace data growth by simply adding more general-purpose servers or waiting for new space to become available. The smarter path, and the one many competitive firms are taking, is to offload the most resource-hungry market data workloads to specialized hardware. By shifting market data processing to FPGA cards, firms can reclaim rack space and reduce power consumption while maintaining low-latency performance.  

FPGA: A Practical Path to Do More with Less 

For firms squeezed by finite rack space and infinite data demands, specialized hardware has become a pragmatic way to break the trade-off. One approach is offloading market data processing and feed handling from CPU cores to field-programmable gate arrays, or FPGAs. 

Unlike CPUs, which are designed for sequential processing and are best suited to tasks with complex control logic or rapidly changing workloads, FPGAs can be programmed to run dedicated workloads in parallel, right at the hardware level. That makes them ideal for fixed, repetitive, high-throughput tasks like parsing market data feeds and signal processing, as the sketch below illustrates. FPGAs can process massive volumes of market data with low latency and low jitter, without maxing out cores, burning excess power, or creating the bottlenecks that software stacks struggle to handle. The result: firms reclaim precious rack space and maintain consistent performance, even when data volumes surge. 
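
To make that concrete, here is a minimal C++ sketch of the per-message work a feed handler performs. The 32-byte wire format is hypothetical (real feeds such as ITCH differ in the details), but the shape is representative: every field lives at a fixed offset, so the work is identical for every message.

    #include <cstdint>
    #include <cstring>

    // Hypothetical 32-byte quote message: every field sits at a fixed offset.
    struct QuoteMsg {
        uint64_t symbol_id;   // bytes  0-7
        uint64_t timestamp;   // bytes  8-15
        uint32_t bid_price;   // bytes 16-19 (fixed-point)
        uint32_t ask_price;   // bytes 20-23 (fixed-point)
        uint32_t bid_size;    // bytes 24-27
        uint32_t ask_size;    // bytes 28-31
    };

    // A CPU executes these extractions one instruction stream at a time, so
    // a burst of messages queues up behind this loop. An FPGA implements the
    // same fixed-offset extractions as parallel pipeline stages: a new
    // message can enter the pipeline every clock cycle, so throughput holds
    // at line rate no matter how hard the feed bursts.
    QuoteMsg parse_quote(const uint8_t* wire) {
        QuoteMsg m;
        std::memcpy(&m.symbol_id, wire +  0, sizeof m.symbol_id);
        std::memcpy(&m.timestamp, wire +  8, sizeof m.timestamp);
        std::memcpy(&m.bid_price, wire + 16, sizeof m.bid_price);
        std::memcpy(&m.ask_price, wire + 20, sizeof m.ask_price);
        std::memcpy(&m.bid_size,  wire + 24, sizeof m.bid_size);
        std::memcpy(&m.ask_size,  wire + 28, sizeof m.ask_size);
        return m;
    }

There is no branching and no state carried between fields, which is precisely the kind of work that maps cleanly onto hardware pipelines and poorly onto scarce, general-purpose cores.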

Centralized vs. Embedded Market Data Processing — and the Hybrid FPGA Alternative 

Embedded Feed Handlers  
(handlers run directly on the same server as the trading strategy) 

Pros:

  • Minimal latency, fewer hops in the data path 

Cons:

  • High CPU usage 
  • Redundant processing across desks 
  • Poor scalability as data volumes increase 
  • Firms often respond by throttling workloads or adding servers — increasing both rack space and power costs 

Centralized Ticker Plants 
(market data is aggregated and normalized in a single system, then distributed to multiple teams) 

Pros:

  • Reduces duplication 
  • Centralizes control 
  • Simplifies maintenance

Cons:

  • Software in the critical path introduces latency jitter and bottlenecks 
  • Can struggle under market bursts, undermining strategies that depend on deterministic latency (see the back-of-the-envelope model below) 
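
A back-of-the-envelope queueing model, with purely illustrative numbers, shows why bursts hurt a software path. Suppose a burst arrives at 1,000,000 messages per second for 10 milliseconds, while the ticker plant sustains 500,000 messages per second:

  backlog after the burst ≈ (1,000,000 − 500,000) msgs/s × 0.010 s = 5,000 messages
  extra delay on the last message ≈ 5,000 msgs ÷ 500,000 msgs/s = 10 milliseconds

A data path quoted in microseconds can thus deliver millisecond-stale ticks at exactly the moments that matter most. The numbers are assumptions, but the mechanism is general: whenever the arrival rate exceeds the service rate, latency grows with the length of the burst.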

The Hybrid FPGA Alternative 

A fully hardware-based data path combines the efficiency of centralized processing with the ultra-low latency of embedded designs. By keeping feed ingestion, normalization, and fan-out entirely in FPGA hardware, with no software in the critical path, firms can absorb extreme market bursts without dropped packets or “retransmit storms.” This frees up CPU cores for what differentiates a desk: trading strategies, real-time risk checks, and other revenue-generating logic. Firms can still use software for less latency-sensitive feed processing. The sketch below shows what the host side of such a design can look like.  
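
As an illustration only (a generic pattern sketched under assumptions, not Exegy’s actual interface), the host side of a hardware data path can reduce to a strategy thread reading finished, normalized records that the card has already parsed and written into host memory:

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical normalized record, already parsed by the card.
    struct NormalizedTick {
        uint64_t symbol_id;
        uint64_t hw_timestamp_ns;  // stamped in hardware, not by the OS
        int64_t  price;            // fixed-point
        uint32_t size;
        uint32_t side;             // 0 = bid, 1 = ask
    };

    // Simplified stand-in for a DMA ring the card writes into. Real designs
    // use vendor-specific descriptors and memory barriers; the point is that
    // the CPU only ever sees completed records.
    struct HwRing {
        static constexpr std::size_t kSlots = 1 << 16;
        NormalizedTick slots[kSlots];
        std::atomic<std::uint64_t> write_seq{0};  // advanced as records land
    };

    void on_tick(const NormalizedTick&) { /* strategy logic lives here */ }

    void strategy_loop(HwRing& ring) {
        std::uint64_t read_seq = 0;
        for (;;) {
            // Busy-poll until the hardware publishes the next record. No
            // parsing, no normalization: every CPU cycle below this line is
            // spent on the strategy itself.
            while (ring.write_seq.load(std::memory_order_acquire) == read_seq) {}
            on_tick(ring.slots[read_seq % HwRing::kSlots]);
            ++read_seq;
        }
    }

Because parsing, normalization, and fan-out happen before the data ever reaches the server, the cores that used to run feed handlers can be reassigned or decommissioned, which is where the rack-space saving comes from.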

The Nexus Advantage 

Exegy Nexus is built around this hybrid FPGA philosophy: 

  • Full FPGA data path — no software in the critical feed-handling path 
  • Centralized, ultra-low latency distribution across strategies  
  • Dense FPGA processing in 1U or 2U appliances (up to 8 cards) 
  • 40%+ footprint reduction in prime co-lo facilities 
  • Future-proof scalability for new markets without adding cabinets 
  • Continued use of software for less latency-sensitive processing 

Nexus consolidates the most resource-hungry parts of the market data pipeline into dedicated hardware, enabling firms to keep their footprint tight while sustaining 1–2 microsecond performance in all market conditions. 

Firms no longer have to choose between speed, scale, and efficiency. 

The Benefits of Offload for Full Data Centers 

With market data volumes continuing to grow and rack space remaining finite, every cabinet saved preserves capacity for what drives revenue. By moving feed processing off CPUs and into specialized hardware, Exegy Nexus enables: 

Rack-Space Optimization  

Offloading feed processing from CPU-based servers to specialized hardware reduces the number of processing cores dedicated to market data, freeing up space in existing racks for latency-sensitive workloads like trading applications, risk checks, or real-time analytics. 

Lower Total Cost of Ownership 

Reduced power draw, fewer points of failure, and more flexible infrastructure management cut operational costs, lower risk, and reduce downtime. 

Future-Proof Scalability 

Offload doesn’t just solve today’s footprint crunch — it sets firms up to scale without adding rows of new cabinets every time market data volumes spike. New markets, new strategies, or rising volatility doesn’t have to mean an endless cycle of hardware sprawl. Instead, firms can grow efficiently, confident that their infrastructure won’t become the bottleneck. 

Conclusion: Turning a Constraint into an Advantage 

Limited rack space remains a structural bottleneck for the trading industry — one that won’t resolve itself as market data volumes continue to climb. Waiting for new co-location builds, squeezing more CPU cores, or kicking the problem down the road is no longer sustainable when the cost of doing nothing compounds with every market burst and every missed trade. 

The firms that thrive will be those that rethink their architecture now, adopting an innovative, hybrid approach that leverages the best of both software and FPGA-powered solutions.  

By offloading the most resource-hungry parts of the market data pipeline to hardware, firms can reduce physical footprints, lower operational costs, and keep their most latency-sensitive workloads exactly where they perform best. 

As the industry faces an era of infinite data and finite space, it’s time to rethink what’s possible. The best-positioned firms are those willing to revisit their market data strategies and adopt innovative architecture to stay ahead. 

Every cabinet saved in prime co-lo protects your ability to grow and trade at your best. 
See exactly how much space, power, and cost Nexus can save you — schedule your assessment today.