Algorithmic Trading - Key Components for Real-Time Trading (Part II of II)

Editor's Note: This article is the second of a two-part series on key components for real-time algorithmic trading. Be sure to read part one if you missed it last week. Remember to sign up for our April 16th discussion on IT in financial markets as well.

> Trading Strategies/Signal Generation

Figure 3

Statistical trading strategies are predictions of future market behavior based upon the detection of a pattern within current and historical data. The strategy is normally deduced by careful analysis of historical data, and is implemented as a pattern detection process that triggers buy or sell orders.

What is classically called Statistical Arbitrage is usually a simple form of this pattern detection, applied over very short periods of time. When strategies are required to respond very rapidly, it becomes necessary to perform the pattern detection and trigger the order automatically in real time.

As algorithmic trading becomes more complex, new trading strategies have been, and continue to be, developed. Each strategy consists primarily of detecting one or more patterns and placing an order when those patterns are detected. Detecting the patterns requires:

  • The calculation of aggregated values
  • A snapshot of the market
  • Calculations to be performed over windows of data
  • Variability in the coefficients
  • Multiple inter-related phases

Many patterns are based upon the comparison of current point data with some form of aggregated data. In the simplest patterns this may mean comparing the current price with a set of mean prices, or comparing sets of mean prices with each other. As strategies have grown in complexity, so have the number and complexity of the aggregations. This will continue to be true, and thus the ability to extend the calculation capability of any system is important.
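
As a minimal sketch of this kind of comparison, the following Python fragment signals a buy when the current price falls a fixed fraction below a recent mean and a sell when it rises above it. The window length and threshold are illustrative assumptions, not values from any particular strategy.

    from statistics import mean

    def signal(prices, window=20, threshold=0.01):
        """Compare the latest price against a recent mean price.

        Returns "BUY" when price is `threshold` below the mean,
        "SELL" when it is `threshold` above, else None.
        (Window and threshold are illustrative assumptions.)
        """
        if len(prices) < window:
            return None                     # not enough history yet
        avg = mean(prices[-window:])        # aggregated value
        last = prices[-1]                   # current point data
        if last < avg * (1 - threshold):
            return "BUY"
        if last > avg * (1 + threshold):
            return "SELL"
        return None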

Patterns are detected over a set of points. Traditionally this was performed by collating a set of data and then performing upon it the calculations used to detect the pattern. When processing in real time it is necessary to perform these calculations in a different manner. If each data point were retained and then processed once a complete set had been collected, the system's performance would depend on the size of the sets: larger sets of data would require more memory and produce a large processing peak when each set was complete. To ensure a real-time system performs regardless of set size, each data point must be processed as it arrives, with each step of the calculation performed as its data point arrives. This ensures that there is no latency at the point the pattern is detected, and the system's requirements are not dominated by the size of the set being operated on.

In a real-time system, the boundary of the set of data to be processed from an infinite stream is called a window. Window definitions vary between strategies: some require time-based windows that process however many data points arrive in the time frame, while others require calculations relative to the number of data points, for example performing the calculation over a specific number of trades.
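
The difference is easy to see in code. Rather than recomputing over the whole set, the sketch below maintains a count-based window incrementally: each arriving point updates a running sum in constant time, so there is no processing peak when the window completes. A time-based window would evict by timestamp instead; the window size here is an arbitrary assumption.

    from collections import deque

    class CountWindowMean:
        """Running mean over the last `size` points, updated per tick."""
        def __init__(self, size):
            self.size = size
            self.points = deque()
            self.total = 0.0

        def update(self, value):
            # Incorporate the new point as it arrives ...
            self.points.append(value)
            self.total += value
            # ... and evict the oldest once the window is full, so the
            # per-tick work is constant regardless of set size.
            if len(self.points) > self.size:
                self.total -= self.points.popleft()
            return self.total / len(self.points)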

During the development of these strategies it quickly becomes apparent that various coefficients must be variable in real time. Some of these may be varied by a trader monitoring the algorithmic system, but many require self-modification: other calculations adjust the core behavior of the strategy, often by adjusting window sizes on aggregates and coefficients in expressions. For example, the volatility in a market may adjust the length of a window.
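
As a sketch of such self-modification, the following hypothetical helper scales a window length inversely with recent realized volatility, so the aggregate reacts faster in turbulent markets. The linear scaling rule and the reference volatility are assumptions for illustration.

    import statistics

    def adjusted_window(returns, base_window=100, ref_vol=0.01,
                        min_window=10, max_window=500):
        """Scale a window length inversely with realized volatility.

        High volatility -> shorter window (faster reaction);
        low volatility  -> longer window. The linear rule and the
        reference volatility are illustrative assumptions.
        """
        vol = statistics.pstdev(returns) if len(returns) > 1 else ref_vol
        scaled = int(base_window * ref_vol / max(vol, 1e-9))
        return max(min_window, min(max_window, scaled))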

Many of the patterns to be detected are constructed from multiple phases, i.e. multiple patterns that are interdependent. When a pattern is detected in the first phase, detection of another pattern is started, but it may be stopped by a reversal in the first phase.
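
Phased detection is naturally expressed as a small state machine, as in this sketch: a caller-supplied phase-one condition arms the detector, a reversal disarms it, and the pattern fires only if phase two completes while armed. The phase predicates themselves are placeholders.

    class TwoPhaseDetector:
        """Phase 1 arms the detector; a reversal disarms it; the
        pattern fires only if phase 2 completes while armed.
        The phase predicates are caller-supplied assumptions."""
        def __init__(self, phase1, reversal, phase2):
            self.phase1, self.reversal, self.phase2 = phase1, reversal, phase2
            self.armed = False

        def on_tick(self, tick):
            if self.armed and self.reversal(tick):
                self.armed = False          # phase-1 reversal cancels phase 2
            elif self.armed and self.phase2(tick):
                self.armed = False
                return "PATTERN_DETECTED"   # both phases satisfied in order
            elif self.phase1(tick):
                self.armed = True           # start watching for phase 2
            return None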

An algorithmic trading system is likely to have multiple strategies operational at any one time. These may have been developed at different times, by different analysts, and may be targeted at different market behavior. To enable this, any algorithmic trading platform must provide the ability to modularize the system and develop different modules independently. It must be possible to add a new strategy without knowledge of, or disruption to, the other strategies within the system. Although strategies are likely to be deployed while the system is non-operational, some systems may require 24x7 operation, and in most cases the ability to stop or pause a strategy in extreme circumstances (controlling loss) is required.

Figure 4: Risk Management, P&L Example

Lastly, for high-volume real-time trading, the applications must also run in a predictable and deterministic manner. Correctly sequencing low-latency computations performed in parallel on today's multi-processor hardware is often a formidable task for application developers, and the system must be inherently capable of delivering the same results when the same dataset is run through it multiple times.
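
One common way to obtain such repeatability is to impose a total order on events before they reach the strategies, rather than processing them in non-deterministic arrival order. The sketch below merges already-ordered feeds on a (timestamp, feed id, sequence) key; the event layout is an assumption for illustration.

    import heapq

    def deterministic_merge(feeds):
        """Merge per-feed event iterators into one totally ordered stream.

        Events are assumed to be (timestamp, feed_id, seq, payload)
        tuples and each feed is assumed to be already ordered; the
        tie-break on (feed_id, seq) makes the merged order identical
        on every replay.
        """
        return heapq.merge(*feeds)      # tuples compare lexicographically

    feed_a = [(1, "A", 1, "tick"), (3, "A", 2, "tick")]
    feed_b = [(1, "B", 1, "tick"), (2, "B", 2, "tick")]
    for event in deterministic_merge([feed_a, feed_b]):
        print(event)                    # identical output on every run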

> Risk Management and Profit & Loss

Algorithmic trading places a different set of requirements on risk management. Whereas it may have been acceptable to share an organization's exposure limits between trading desks, this has significant limitations in an algorithmic trading system. There may be multiple strategies designed to operate under different market circumstances, and there may be times when some strategies are dormant while others are trading heavily. In these cases a simple sharing of potential exposure between the strategies will fail to use the full potential available. Thus there is a need to move to active, real-time risk management.

A real-time risk manager will control the market exposure based on limits set for the system as a whole and on risk calculations defined by the business. Because the risk calculations are performed in real time on up-to-date market data, they can be very accurate, allowing the potential trading limits to be fully utilized. The risk manager can also, where appropriate, use risk-reduction strategies to trade out of positions where market movements, rather than generated orders, have shifted the organization's position.
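
A bare-bones sketch of such a gatekeeper follows: each order is checked against a system-wide limit on current positions before being accepted. Gross notional is used as the exposure measure purely for simplicity; a real risk calculation would be defined by the business.

    class RiskManager:
        """Gate orders against a single system-wide exposure limit.

        Gross notional is used as the risk measure purely for
        simplicity; real calculations are defined by the business.
        """
        def __init__(self, max_gross_notional):
            self.limit = max_gross_notional
            self.notional = {}          # symbol -> signed notional held

        def try_order(self, symbol, qty, price):
            """Accept the order only if gross notional stays in limit."""
            new = self.notional.get(symbol, 0.0) + qty * price
            gross = sum(abs(n) for s, n in self.notional.items()
                        if s != symbol) + abs(new)
            if gross > self.limit:
                return False            # reject: would breach the limit
            self.notional[symbol] = new
            return True                 # accept and update exposure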

The risk manager may also include any compliance limits that the enterprise may wish to impose.

By separating risk management from the trading strategies, the issue does not have to be considered in the development of each strategy. The allocation of limits to a strategy is then directly related to confidence in the strategy rather than to broader factors.

Where there are many trading strategies, consideration has to be given to balancing the demands of each strategy at various times against the risk limits. This may be performed in a number of ways; however, consideration should be given to how visible each strategy's true efficiency remains. It may be better to allow a strategy to operate up to its limits while the risk manager constrains how much of the strategy's position is actually exposed to the market. In this case the risk manager maintains the real-world positions while each strategy is aware only of its virtual position.
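
The virtual/real split can be sketched as two ledgers: each strategy's orders always update its own virtual book, while the risk manager derives the capped real-world position actually sent to market. The simple clamp used here is an illustrative policy, not a recommendation.

    class PositionVirtualizer:
        """Strategies see their full (virtual) positions; only a capped
        real-world position is exposed to the market. The simple clamp
        is an illustrative policy."""
        def __init__(self, max_real_position):
            self.cap = max_real_position
            self.virtual = {}           # strategy name -> signed position
            self.real = 0               # position actually in the market

        def on_strategy_order(self, strategy, qty):
            # The strategy's virtual book always reflects its own orders.
            self.virtual[strategy] = self.virtual.get(strategy, 0) + qty
            # The real position tracks the virtual total, clamped to cap.
            target = max(-self.cap, min(self.cap, sum(self.virtual.values())))
            delta, self.real = target - self.real, target
            return delta                # quantity actually sent to market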

A major aspect of monitoring a strategy's performance is its profit and loss. As automated strategies may trade at high rates, they can accumulate profit and loss rapidly, so it becomes desirable to move to real-time profit and loss calculation. This enables the profit and loss of a strategy to be monitored in real time, and also allows the strategy to use these calculations to adjust its own behavior. For example, a strategy may take an approach similar to that used in gambling, where the more profit is made, the larger the stake available to trade with.
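
Real-time P&L can be maintained incrementally, fill by fill, as in the sketch below: realized P&L is updated whenever a fill reduces the position, and unrealized P&L is marked against the latest price. Average-cost accounting is an assumption; FIFO or other conventions would work equally well.

    class StrategyPnL:
        """Incremental P&L per fill, using average-cost accounting
        (an assumption; FIFO and other conventions also work)."""
        def __init__(self):
            self.position = 0
            self.avg_cost = 0.0
            self.realized = 0.0

        def on_fill(self, qty, price):
            if self.position * qty >= 0:    # extending or opening
                total = self.position + qty
                if total != 0:
                    self.avg_cost = ((self.avg_cost * self.position
                                      + price * qty) / total)
                self.position = total
            else:                           # reducing or flipping
                closed = min(abs(qty), abs(self.position))
                sign = 1 if self.position > 0 else -1
                self.realized += closed * sign * (price - self.avg_cost)
                self.position += qty
                if self.position * sign < 0:   # flipped through zero
                    self.avg_cost = price

        def unrealized(self, mark):
            # Mark the open position against the latest price.
            return self.position * (mark - self.avg_cost)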

Like risk management, it is useful to separate the overall performance of the system (the organization's true profit and loss) from that of each strategy. By maintaining potential profit and loss calculations in real time for each strategy, in conjunction with the risk manager, it becomes possible to identify strategies that did not achieve their potential due to overall system constraints, as well as those that are underperforming. This enables the balance between the strategies to be adjusted more accurately.

> Order Management

Figure 5: Order Management Example

An order is passed from a trading strategy, via the risk management system, to be executed at one or more venues. In many cases, but not always, the venue where the trade is to be executed will be specified. The trading strategy may indicate some bounds on how and when the trade is to be executed. Normally, however, the decision as to how to execute the order and the management of the state of the order will be performed by an order management system.

The structural part of an order management system is fairly constant. It needs to record and track order state, matching fills to orders, and communicate this information back to the risk management system.

Execution strategies apply statistical mechanisms to place orders into the market. This may include splitting orders into multiple exchange orders or amalgamating orders, and orders may be delayed for short periods to gain more favorable market conditions. Execution strategies are generally simpler and longer-lived than trading strategies. However, there may still be multiple strategies that apply at different levels of risk or are used under different market conditions. Where there is more than one execution strategy, the Order Management System must route orders to the correct execution strategy.
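
A sketch of this structural part follows: a parent order is split into fixed-size child slices and fills are matched back to the parent so its state can be reported upstream. Fixed-size slicing stands in for a real execution strategy, and the slice size is an arbitrary assumption.

    import itertools

    class ParentOrder:
        """Track a parent order, its child slices, and matched fills."""
        _ids = itertools.count(1)       # unique parent order ids

        def __init__(self, symbol, qty, slice_size=100):
            self.id = next(self._ids)
            self.symbol, self.qty, self.filled = symbol, qty, 0
            # Fixed-size slicing stands in for a real execution strategy.
            self.children = [min(slice_size, qty - i)
                             for i in range(0, qty, slice_size)]

        def on_fill(self, fill_qty):
            """Match a child fill back to the parent; report its state."""
            self.filled += fill_qty
            return "FILLED" if self.filled >= self.qty else "PARTIAL"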

Some venue variations are deferred to the exchange interfaces, but some must be managed in the Order Management System. For example, where trading is performed across venues and one venue supports a style of order (e.g. Good till Cancelled) that another doesn't, the OMS may choose to emulate the behavior of one venue on another (e.g. re-submit the outstanding part of an order each morning) or may choose to limit the types of order that may be placed.
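
The Good-till-Cancelled emulation mentioned above might be sketched as follows: the OMS remembers each order's unfilled remainder and re-submits it as a new day order at each session open. The submit callback is a placeholder for the real venue interface.

    class GtcEmulator:
        """Emulate Good-till-Cancelled on a day-order-only venue by
        re-submitting the unfilled remainder at each session open."""
        def __init__(self, submit_day_order):
            self.submit = submit_day_order  # placeholder venue interface
            self.working = {}               # order_id -> (symbol, remaining)

        def place_gtc(self, order_id, symbol, qty):
            self.working[order_id] = (symbol, qty)
            self.submit(order_id, symbol, qty)

        def on_fill(self, order_id, fill_qty):
            symbol, rem = self.working[order_id]
            rem -= fill_qty
            if rem <= 0:
                del self.working[order_id]  # fully filled, nothing to carry
            else:
                self.working[order_id] = (symbol, rem)

        def on_session_open(self):
            # Carry every unfilled remainder into the new trading day.
            for order_id, (symbol, rem) in self.working.items():
                self.submit(order_id, symbol, rem)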

Issues with venue variations are more prevalent in algorithmic trading systems because such systems operate without human intervention and typically trade on more venues than human traders do.

> Exchange Interfaces

One of the challenges in moving to algorithmic trading, especially in emerging markets and non-equities assets, is interfacing with execution venues. Many venue interfaces are based on old protocols designed to support manual traders. These protocols often lack the identification and synchronization mechanisms needed to run an algorithmic trading system, which means the design of the exchange interfaces must be handled carefully; it is not as simple as just converting protocols.

Some of the variations and challenges (see the synchronization sketch after this list) may include:

  • Some venues distribute order book snapshots but may not distribute every trade
  • An accurate picture of the current order book may be unobtainable because some venues do not consistently synchronize order book snapshots with trade records
  • Some venues return fill information on the same channel as orders, while others include it in the market data
  • Some venues do not assign unique trade and message ids
  • Venues offer differing levels of security mechanisms
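
Several of these issues reduce to the familiar sequence-number recovery pattern sketched below: incremental updates are buffered until a baseline snapshot arrives, then any buffered updates newer than the snapshot are replayed on top of it. The message fields are assumptions about a generic feed, not any particular venue's protocol.

    class BookSynchronizer:
        """Reconcile snapshots with incremental updates by sequence
        number. The message layouts are generic-feed assumptions."""
        def __init__(self):
            self.book = None            # last consistent book state
            self.seq = 0                # sequence of last applied message
            self.pending = []           # updates buffered pre-snapshot

        def on_update(self, seq, update):
            if self.book is None:
                self.pending.append((seq, update))  # no baseline yet
            elif seq == self.seq + 1:
                self._apply(update)
                self.seq = seq
            # On a gap, a real handler would request a fresh snapshot.

        def on_snapshot(self, seq, book):
            self.book, self.seq = book, seq
            # Replay buffered updates that are newer than the snapshot.
            for s, update in sorted(self.pending):
                if s == self.seq + 1:
                    self._apply(update)
                    self.seq = s
            self.pending.clear()

        def _apply(self, update):
            side, price, qty = update   # assumed update layout
            self.book.setdefault(side, {})[price] = qty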

With the emergence of newer electronic venues (e.g. MTFs in Europe) the variations will grow.

Figure 6: Exchange Interface Example

Algorithmic trading typically involves a broader spread of venues. A system is capable of working across far more venues than a human being, enabling cross-venue and cross-asset arbitrage trading.

At this stage in the process any order must be shipped to the venue as rapidly as possible. The execution strategies may be working to very small time tolerances, so any latency introduced by the exchange interfaces must be kept to a minimum.

> Monitoring and Control

An algorithmic trading system is likely to generate more orders than a manual trader ever could, and some of the strategies involved may have tight latency constraints. This makes it impossible for a human being to monitor each trade generated by the system.

An algorithmic trading system may contain many trading and execution strategies, with only a subset fully understood by any one individual. Assessing whether the behavior of the system as a whole is correct therefore requires the involvement of multiple individuals. With multiple strategies, new measures of business performance must be developed, and each individual strategy may have its own measures of performance.

The strategies involved are likely to have parameters that control their operation. Some of these may be alterable interactively, for example exposure limits, while others must vary at a faster rate than a human can control. Thus aspects of the system may be self-adjusting, for example responding to changes in volatility or liquidity.

All of this leaves the management of a trading system with the challenge of applying practical oversight. Monitoring and controlling these applications in practice requires automated, real-time risk, compliance and profit and loss systems capable not only of notifying the correct personnel within an organization, but also of tripping circuit breakers to liquidate positions, stop trading, or both.
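
A circuit breaker of the kind described can be sketched as a loss threshold that, once breached, blocks further order flow and notifies the appropriate personnel. The threshold and the notification callback are illustrative assumptions.

    class CircuitBreaker:
        """Trip on a running-loss threshold: halt new orders and notify.
        The threshold and notify callback are illustrative assumptions."""
        def __init__(self, max_loss, notify):
            self.max_loss = max_loss
            self.notify = notify
            self.tripped = False

        def on_pnl(self, running_pnl):
            if not self.tripped and running_pnl <= -self.max_loss:
                self.tripped = True
                self.notify(f"Circuit breaker tripped at P&L {running_pnl}")

        def allow_order(self, order):
            return not self.tripped     # block all new orders once tripped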

> Conclusion

In summary, the solution to developing an algorithmic trading system is automation. This requires the strategies created by quantitative analysts to be integrated into an execution engine – one that does not require human approval and limits human intervention to monitoring the data or pulling the plug.

Today’s trading environments must be able to support an increasing amount of high-volume automated trading in various forms.  Therefore, the ability to deal with high volumes of market data and make trading decisions quickly is now a requirement to remain in business for many market participants.

The ability to rapidly prototype, back-test and then deploy algorithms is already important, and will become even more important as firms deploy algorithms against each other. Time to market is paramount both for firms seeking alpha and for those offering order execution services to their clients.

About the Author

Colin Clark, a 22-year veteran of the financial services industry and the former founding CTO of Kaskad Technology, has responsibility for professional services, training and technical support for StreamBase's growing customer base.

Prior to co-founding Kaskad, Clark founded Selero in 1999, serving as President and Chief Executive Officer, where he grew the company from an unrecognized start-up to an industry leader. In another previous role, Clark was responsible for technology and operations at Western Asset Management, where he was ranked as one of America's top CIOs in the investment management community.

Earlier in his career Clark worked for New Era of Networks, Fidelity, Putnam, State Street Bank & Trust, Shearson Lehman, Drexel, and Kleinwort Benson. During his career Clark has participated in the design, engineering and implementation of some of the largest and most complex financial systems in the world, including NASD, NASDAQ, Boston Stock Exchange, Barclay's, The Paris Stock Exchange, Chase, Liberty/CEDEL, CSFB, Citibank, and Standard Charter Bank of South Africa.

Colin offers many varied perspectives of the financial services market from his past roles with software vendors, service providers and some of the world’s leading financial institutions. This experience has allowed him to develop a skill set that is ideal for ensuring the success of StreamBase’s current and future customers.

Colin attended Case Western Reserve University, Boston College and is a graduate of the Executive Management program from the Haas School of Business.
