Spotlight Review
Emerging themes and challenges in algorithmic trading and machine learning
April 2020

About FMSB

FICC Markets Standards Board Limited ('FMSB') is a private sector, market-led organisation created as a result of the recommendations in the Fair and Effective Markets Review ('FEMR') Final Report in 2015. One of the central recommendations of FEMR was that participants in the wholesale fixed income, currencies and commodities ('FICC') markets should take more responsibility for identifying and fixing poor market practice so that they operate in the best interest of their clients. Clear, practical guidance that delivers transparent, fair and effective practices will rebuild sustained trust in wholesale FICC markets.

FMSB brings together people at the most senior levels from a broad cross-section of global and domestic market participants and end-users.

In specialist, focused committees, subcommittees and working groups, industry experts debate issues and develop FMSB Standards and Statements of Good Practice, and undertake Spotlight Reviews that are made available to the global community of FICC market participants and regulatory authorities. FMSB has issued 18 publications since 2016. As part of its analysis on the root causes of market misconduct, FMSB is focusing on the challenges of new market structures.

Spotlight Reviews

Spotlight Reviews encompass a broad range of publications used by FMSB to illuminate important emerging issues in FICC markets. Drawing on the insight of members and industry experts, they provide a way for FMSB to surface nascent challenges market participants face and may inform topics for future work. Spotlight Reviews will often include references to existing law, regulation and business practices.
However, they do not set or define any new precedents or standards of business practice applicable to market participants.

Find out more about the FICC Markets Standards Board on our website: fmsb.com

The author

Rupak Ghose is leading the FMSB Spotlight Reviews on FICC market structure and the impact of regulatory and technological change on the fairness and effectiveness of wholesale FICC markets.

Rupak has more than 15 years' experience in FICC market structure. He was Head of Corporate Strategy for ICAP/NEX for six years, providing advice on strategic direction, market structure change, the customer/competitor landscape, new business ventures and M&A. He was a trusted counsellor to the PLC Board on the dramatic transformation of the business from interdealer broker to focused financial technology firm and the ultimate sale of the businesses.

Prior to this Rupak spent more than a decade at Credit Suisse as an equity research analyst, where he was ranked Top Three on numerous occasions by buy-side clients in Institutional Investor and other surveys for his coverage of asset managers, exchanges and other capital markets focused companies.

Contents

1. Executive summary
2. Model risk: Eight factors to consider in the importance of model risk management
3. New asset classes: Expansion of algorithmic market making
4. Machine learning: Growth and challenges
5. Execution algorithms: Growing use in FICC markets
6. Best practices: Benefits of practitioner-led solutions

1. Executive summary

This Spotlight Review examines emerging themes in algorithmic trading in FICC markets, including model risk management in market making, the adoption of new machine learning techniques and the increased use of execution algorithms. The latter refers to algorithms that are offered to clients on an agency basis and used for order execution. This Spotlight Review aims to generate further discussion on these topics and their relevance to future standards work by FMSB. Given the topical themes discussed, it will be of interest to a wide audience of participants in global wholesale FICC markets, but it is specifically targeted at those senior managers with supervisory responsibility for algorithmic trading and those working on the application of machine learning in algorithmic trading.

The use of algorithmic trading is not new, and over the past two decades it has profoundly changed the nature of trading and market structure in many FICC markets in terms of the increased velocity of trading, levels of internalisation and cross asset/venue trading patterns. Algorithmic trading methods and electronic trading platforms have grown in a synergistic fashion.

As the adoption of algorithmic trading continues to grow, the focus on governance of algorithmic trading has increased significantly. Central banks and other regulators have issued guidelines on the controls for algorithmic trading, focusing primarily on the documentation and controls expected for the development, testing and deployment of algorithms; and FMSB members are developing a Statement of Good Practice expanding on this area. However, the application of model risk management to algorithmic trading is an area that has received less attention. Nevertheless, the materiality of algorithmic model risks warrants a specialised practitioner-led approach.

Historically, algorithmic trading has been most prominent in highly liquid markets, which have significant amounts of high-quality data. As the application of algorithms has expanded into less liquid products and with increased utilisation of new machine learning techniques, the challenges of securing the quality and consistency of data needed are self-evident. Perhaps less obvious is the need to manage for increased model risk.

Progress towards increasing use of self-learning machines will be incremental and over an extended period. In the near term, machine learning in wholesale FICC markets looks likely to remain restricted to specific minor functions only and as a relatively small part of the overall trading and reporting process, with tight controls in place. As in other businesses where machine learning is being adopted, there are nascent concerns about the conduct risks that might crystallise as a result of unintended design flaws, implementation and use. There is also increasing discussion within the industry about practices that can mitigate any market abuse or stability risks that may emerge.

The use of execution algorithms is well established in cash equities markets, and they are increasingly being adopted in foreign exchange. As they move into rates, credit and emerging markets, a key challenge will be sourcing market data, given the less continuous nature of these markets. Moreover, banks providing execution algorithms to their clients need to be alert to any potential conflicts of interest that may arise in how they provide such agency products and how this relates to their core FICC market making businesses, which act on a principal basis.

The increasing usage of algorithmic trading and the growing complexity of models make the topics and emerging themes discussed in this Spotlight Review extremely important. There are likely to be benefits from creating global best practices for model risks which are not fully covered by existing regulations. FMSB has a role to play in areas like this, where there may be knowledge gaps between the private sector and regulators and where there is scope for market participants to work together to address the issues rather than in isolation. We propose that market practitioners, given their deep domain expertise, are in a better position to provide solutions that are proactive on managing risks.

For global wholesale FICC markets this Spotlight Review outlines the:
- increased importance of model risk management in areas where algorithmic trading is being deployed;
- challenges faced as algorithmic market making expands into less liquid asset classes;
- adoption of new machine learning techniques in algorithmic market making;
- increased use of execution algorithms; and
- best practices, including the role of practitioner-led solutions.

2. Model risk

Eight factors to consider in the importance of model risk management

There are a number of existing regulatory requirements and associated guidance focusing on algorithmic trading, both in Europe and beyond. Furthermore, guidance has been published in some jurisdictions on the topic of model risk. However, the application of model risk management to algorithmic trading is an area that has received less attention. In this section we outline eight factors to consider when looking at the importance of model risk management to areas where algorithmic trading is deployed:

1. Significant progress by regulators
2. The importance of model risk management
3. Unique nature of model risk in algorithmic trading
4. Crucial role of data inputs
5. The difficulty of benchmarking algorithmic models
6. The important role of testing and validation in model governance
7. Scenario analysis and capturing unintended consequences
8. The need for a robust second line of defence

1. Significant progress by regulators

Algorithmic trading is increasingly regulated in major global financial centres. In the UK, both the Financial Conduct Authority (FCA)¹ and the Prudential Regulation Authority (PRA)² have issued supervisory guidelines relating to governance, algorithm approval processes, testing and deployment, documentation of algorithms, and risk controls. Significant risks arise from the failure of systematic or operational controls that are intended to prevent or limit loss exposure for highly automated transactions. System runaway issues have the potential to cause material losses in a short period of time. The lack of a robust software development lifecycle process was cited as the main cause of high-profile incidents in recent years, such as seen at Knight Capital.³ The other regulatory focus has been on conduct, i.e. the risk of algorithmic strategies being coded, or learning, to disadvantage clients, abuse markets or cause disorderly markets.

The current regulatory guidelines, which are principally focused on operational and conduct risks, may mitigate some risks from models through the consolidated approach to documentation, testing, controls and performance analysis at a trading algorithm level. For instance, a lack of model robustness may lead to unexpected P&L losses, but these would be bounded by a number of risk controls at an algorithm level. These include continuous validation in the form of P&L checks covering volatility/skew of returns and significant financial losses, position limits, and price/spread limits. As a result, even though some models in algorithmic trading strategies may be highly complex, residual algorithmic model risk does not necessarily have to be high.
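The algorithm-level controls described above (P&L checks, position limits, price/spread limits) can be sketched as a simple pre-trade gate. This is a minimal illustration only: the function names, structure and threshold values are hypothetical, not drawn from any regulatory guidance or firm's framework.

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    """Illustrative algorithm-level limits; all thresholds are hypothetical."""
    max_position: float = 1_000_000.0   # absolute position limit
    max_daily_loss: float = 50_000.0    # cumulative daily P&L stop
    max_spread_bps: float = 25.0        # widest quotable spread, in bps

def pre_trade_checks(position: float, daily_pnl: float,
                     quoted_spread_bps: float,
                     limits: RiskLimits) -> list:
    """Return the list of limit breaches; an empty list means the order may proceed."""
    breaches = []
    if abs(position) > limits.max_position:
        breaches.append("position limit")
    if daily_pnl < -limits.max_daily_loss:
        breaches.append("daily loss limit")
    if quoted_spread_bps > limits.max_spread_bps:
        breaches.append("spread limit")
    return breaches
```

In practice such gates run continuously alongside the strategy, so that even a misspecified model's losses remain bounded at the algorithm level, as the text notes.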

2. The importance of model risk management

At the same time, as algorithmic trading expands into new and more complex areas, there may be a benefit to best practices relating to how models are deployed here. Model validation in algorithms should consider factors such as model complexity, appropriateness of model methodologies, input data quality, controls around model assumptions and implementation. Execution controls, back-testing, sensitivity analysis, erroneous data handling measures, and clear documentation are some of the key mitigants.

Risks can be greater in less liquid asset classes where pricing is less transparent, and the liquidity of the product should be considered when judgements about model risk are being made. At the same time, expectations around pricing precision should also be considered. For instance, in data-rich, heavily traded instruments these expectations can be extremely high, while in data-light, infrequently traded instruments pricing precision may have a larger allowable error term.

Early supervisory guidance on model risk from the Board of Governors of the Federal Reserve System in the paper SR 11-7⁴ was focused across all types of models, with reference to risk management and balance sheet/capital calculations, given the inadequacies exposed by the 2007-2008 global financial crisis. The paper defined a model as follows:

"the term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. A model consists of three components: an information input component, which delivers assumptions and data to the model; a processing component, which transforms inputs into estimates; and a reporting component, which translates the estimates into useful business information."

It goes on to state that:

"Model risk occurs primarily for two reasons:
- The model may have fundamental errors and may produce inaccurate outputs when viewed against the design objective and intended business uses; the quality of model outputs depends on the quality of input data and assumptions.
- The model may be used incorrectly or inappropriately. Even a fundamentally sound model producing accurate outputs consistent with the design objective of the model may exhibit high model risk if it is misapplied or misused."

3. Unique nature of model risk in algorithmic trading

There are fundamental differences in algorithmic model risk when compared to more traditional risk or capital calculation models. Consequently, any approach leveraging existing model risk validation processes may need adjusting. The risk associated with misspecification in any single model may be mitigated by bounds placed on how any model output data is used by the overall trading strategy. This, combined with the dynamic feedback in a live electronic trading ecosystem, means that residual model risk can be low in algorithmic trading. Consequently, less weighting can be placed on the accuracy of a model's estimates or predictions and more on the implementation testing, back-testing and controls that minimise the conduct and operational risks.

The number of individual models deployed in an algorithmic trading system is much larger than in traditional areas, so documentation and model risk ratings, while still key, will need to be scalable to be effective. Moreover, the depth and frequency of model validation deployed should reflect the complexity and potential impact of individual models. There are often very simple model assumptions made within an algorithm, for instance the use of moving averages in price computation. For these 'de minimis' models, it is difficult or impossible to perform an assessment of ongoing performance, especially determining the specific impact on the overall P&L generated. This should be considered when the governance framework is being applied.

Other important components might not meet the definition of a model and so could typically be out of scope for model risk review. Interpretations differ on an appropriate definition for a model within algorithmic trading. One approach is that an algorithmic trading model estimates or predicts an observable quantity, or that it involves some mathematical derivation of a non-observable quantity. Either of these approaches renders much algorithmic code as business logic that is therefore out of scope of the model definition.

4. Crucial role of data inputs

The amount, quality and consistency of data inputs represent crucial components of model risk management. The risks here include erroneous or stale input data and broader constraints such as sparse central limit order book transaction data, lack of depth and accuracy in other data sources, or single points of failure. Poor data quality and governance can create operational risks and conflicts of interest from inappropriate use of private client data and incorrect or inadequate interpretation of data sources.

In many liquid markets, there is a dependence on Central Limit Order Books (CLOBs) as reference prices, and when a lack of depth or market structure issues drive price changes on these platforms that are not in line with fundamentals, there is a risk in following them 'blindly' as a key data input. A high-profile example in rates markets was the 15 October 2014 US Treasury flash crash⁵ when, despite an absence of material news flow, there was a 37 basis points (bps) intraday trading range in 10-year US Treasury yields. Two examples in recent years in foreign exchange markets are the 7 October 2016 British sterling flash event⁶ and the 3 January 2019 Japanese yen flash crash.⁷ In the former, despite limited news flow, sterling depreciated 9% versus the US dollar in early Asian trading hours before retracing most of the move. The latter saw a 4% appreciation of the Japanese yen against the US dollar and much larger (circa 10%) moves against other currencies such as the Australian dollar. It had similarities to the sterling flash crash in terms of limited news flow and occurring during light trading in early Asian trading hours, but unlike other flash crashes in foreign exchange, it impacted a wider range of currency pairs than just the US dollar pairing.

In less liquid markets such as credit, post-trade regulatory data may not give an accurate picture of liquidity given it tends to be focused on smaller size trades. Recent trade data may become irrelevant if market conditions change materially, and a credit rating used as an input in pricing may become out of date relative to market conditions. A recent example has been the wide discount at which bond exchange-traded funds (ETFs) have traded relative to their net asset value, reflecting the superior liquidity of the former and the lag of the latter, where third-party pricing services may not have updated their valuation models to reflect changing conditions in credit markets.

The Federal Reserve stated in its model risk guidance that:

"The data and other information used to develop a model are of critical importance; there should be rigorous assessment of data quality and relevance, and appropriate documentation. Developers should be able to demonstrate that such data and information are suitable for the model and that they are consistent with the theory behind the approach and with the chosen methodology. If data proxies are used, they should be carefully identified, justified, and documented."
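Screening for the erroneous or stale input data described in this section is one of the more mechanical mitigants. A minimal sketch of such a filter follows; the staleness and outlier thresholds, and the function itself, are hypothetical illustrations rather than recommended values.

```python
import statistics

def validate_reference_price(new_price: float, recent_prices: list,
                             last_update_age_s: float,
                             max_age_s: float = 5.0,
                             max_z: float = 4.0) -> bool:
    """Reject a reference price that is stale or a statistical outlier.

    max_age_s and max_z are illustrative thresholds, not recommendations.
    """
    if last_update_age_s > max_age_s:
        return False  # feed is stale: the quote has not updated recently
    if len(recent_prices) >= 2:
        mu = statistics.mean(recent_prices)
        sigma = statistics.stdev(recent_prices)
        # flag prices far outside the recent distribution
        if sigma > 0 and abs(new_price - mu) / sigma > max_z:
            return False
    return True
```

A filter like this would sit in front of any model consuming CLOB reference prices, reducing the risk of following a disorderly print 'blindly', though it cannot distinguish a genuine gap move from an erroneous one.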

5. The difficulty of benchmarking algorithmic models

Another consistent focus of the 2011 Federal Reserve guidance on model validation is benchmarking:

"Comparison with alternative theories and approaches is a fundamental component of a sound modelling process. Benchmarking is the comparison of a given model's inputs and outputs to estimates from alternative internal or external data or models."

In other segments of financial services, such as credit ratings data or securities valuation, there are third-party industry data providers that allow for independent benchmarking relative to peers. Algorithmic trading uses many publicly available data inputs, and some comparisons here of inputs may be possible. However, peer group comparisons of the inner workings of algorithms and modelling assumptions are more difficult because of the proprietary nature of most algorithmic trading models and how they process and use these data inputs. Where peer group benchmarking is not appropriate, performance monitoring is critical.
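Where an alternative internal model does exist, benchmarking in the sense quoted above can be as simple as tracking the divergence between the two models' outputs on identical inputs. The helper below is an illustrative sketch, not a method prescribed by the guidance.

```python
def benchmark_divergence(model_outputs: list, benchmark_outputs: list) -> float:
    """Mean absolute divergence between a model's outputs and a benchmark
    alternative's outputs on the same inputs, aligned pairwise."""
    if len(model_outputs) != len(benchmark_outputs):
        raise ValueError("output series must be the same length")
    n = len(model_outputs)
    return sum(abs(m - b) for m, b in zip(model_outputs, benchmark_outputs)) / n
```

A persistent rise in this divergence is the kind of signal a performance-monitoring process might escalate for review when no external peer benchmark is available.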

6. The important role of testing and validation in model governance

Given the limited ability to conduct detailed benchmarking against competing algorithmic trading models, it is important to have a rigorous model validation and performance monitoring process. With the drive for improved efficiency across the whole financial services sector, it is natural for there to be a drive to re-use as many components of existing models as possible in new products and geographies. The question of whether a particular model is appropriate for use in a specific market, asset class or venue is not a new one, but it is likely to be more common than ever in future.

Core to model assessment is the testing of model robustness and reliability to ensure safe and sound implementation. However, SR 11-7 allows firms to take the materiality of model risk into consideration when devising an approach to model risk management in order to meet supervisory expectations. Given the differences between pricing or risk models and algorithmic trading models, different model validation approaches may need to be developed, where the control framework should be considered in deciding the model risk rating and any subsequent validation and testing requirements.

7. Scenario analysis and capturing unintended consequences

As model risk increases in complexity, scenario analyses that stress test data inputs and their impact on algorithmic models become increasingly important. This may include negative stress testing, which seeks to determine the conditions under which the model assumptions break down. Where model risks are found, controls should be put in place. Limitations to data inputs can add to the uncertainty of results, and the real world is generally more unpredictable and complex than models. Another unintended risk that is extremely hard to capture is that of similarities, and resulting interdependencies, between the algorithmic models of different firms.

Capturing the unintended consequences of algorithms and modelling components not performing in line with their intended aims is especially important. The behaviour of individual algorithms and modelling components may be as expected, but the combination of models up to the trading algorithm level may not be as expected. Unfortunately, it is very difficult to develop testing to demonstrate this, even with extremely clear guidelines on the aims of specific models.
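Negative stress testing of the kind described above can be sketched as a search over input shocks for the point at which a model's output breaches a bound. The sketch below uses a crude uniform scaling shock purely for illustration; a real test would shock inputs individually and in combination, and the names and bound here are hypothetical.

```python
def negative_stress_test(model, base_inputs: list,
                         shocks: list, max_abs_output: float) -> list:
    """Return the shock sizes under which the model's output exceeds a bound.

    `model` maps a list of inputs to a single number. Each shock scales
    every input by (1 + shock) before re-evaluating the model.
    """
    failures = []
    for shock in shocks:
        stressed = [x * (1.0 + shock) for x in base_inputs]
        if abs(model(stressed)) > max_abs_output:
            failures.append(shock)
    return failures
```

The output, the set of shocks under which assumptions break down, is the raw material for deciding where additional controls are needed.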

8. The need for a robust second line of defence

Given the high degree of technical expertise needed, there is a fine balance to be drawn between having validation by deep subject matter experts in the first line of defence and an independent, unconflicted second line of defence with perhaps lower technical expertise. It may be difficult to have a second line of defence with the quantitative trading expertise able fully to challenge the first line, but it is crucial that the second line has enough product and technical knowledge to validate and test models properly. This will involve understanding the mitigating controls and being able to drive relevant scenario analysis covering how the model performs in different conditions to minimise any market abuse and market stability risks.

Many large banks have highly experienced and dedicated second line functions, but there remains a question about whether this is as embedded across all firms as, for instance, independent product valuation and balance sheet validation functions are. There is a different challenge for how smaller firms without such resources can perform these tasks.

Given the expected growth in machine learning and in automated trading in markets with less transparently priced products (which we discuss further in this Spotlight Review), there are likely to be benefits from creating global best practices for model risks which are not fully covered by existing regulations. FMSB has a role to play in areas like this, where there may be knowledge gaps between the private sector and regulators and where there is scope for market participants to work together to address the issues rather than in isolation. FMSB's work focuses on areas that impact transparent, fair and effective markets and supports open and competitive markets that deliver the right outcomes for end-users.

3. New asset classes

Expansion of algorithmic market making

Traditionally, algorithmic trading has focused on near-continuous markets such as cash equities, spot foreign exchange, futures and on-the-run US Treasuries, which are extremely liquid and can provide huge amounts of historical market data. These include both centralised marketplaces and more fragmented ones where bilateral trading dominates, but all these markets have publicly available reference prices generated by transactions on CLOBs. More recently, algorithmic market making has started to expand into other product categories such as over-the-counter (OTC) derivatives and bonds (beyond on-the-run US Treasuries).

These developments have been driven by a combination of factors: opportunities created by new technology, regulatory imperatives for more electronic trading and the need to reduce the costs of trading in a world where returns are under heavy pressure. The arrival of electronic and algorithmic trading in these new asset classes has brought significant benefits for market participants.

But algorithmic trading in these new product categories has also created new challenges and risks, given the more limited transaction data available, more limited transparency, the greater market concentration of counterparties, the lack of centralised marketplaces and the potentially longer holding period of positions. The use of algorithms in such markets can create different market fairness and effectiveness risks to those in faster markets and potentially result in higher tail risks. Three challenges are presented below.

1. Increased difficulty of sourcing market data

Historical market data is the fuel that powers algorithms, and most algorithmic trading models need detailed market data stretching back over a period that includes multiple types of market environment, including periods of stress. By definition, the amount and quality of historical market data is more limited in less liquid markets. For instance, there may not be detailed tick-level data in less liquid markets, or transaction data may be delayed in terms of reporting (e.g. most bond transaction data in Europe is reported with a one-month time lag, with only a limited number of individual issues currently reported in real time or with a 15-minute time lag).

For many OTC derivatives, corporate and emerging market bonds, the level of transaction data is too limited to drive algorithmic models. In these cases, pricing models can sometimes be built on data from related, more liquid markets, as a proxy for the less liquid instrument. There are opportunities to engineer 'artificial' data sets that have similar statistical properties to real market order and transaction data, in order to train algorithms. There are also opportunities, with machine learning algorithms, to use unstructured data from other sources in order to enrich historic price information (see below). Producing and maintaining such parallel, engineered or unstructured data itself carries serious and practical data governance challenges for firms attempting to use such strategies.
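One common way to engineer an 'artificial' data set with statistical properties similar to real data is to block-bootstrap historical returns, which preserves short-range autocorrelation. The sketch below is purely illustrative of the idea, not a method discussed or endorsed in this Review, and the block size and seeding are arbitrary assumptions.

```python
import random

def synthetic_prices(real_prices: list, n_steps: int,
                     block: int = 5, seed: int = 0) -> list:
    """Generate an artificial price path by block-bootstrapping real returns.

    Resampling contiguous blocks of returns (rather than single returns)
    preserves short-range autocorrelation in the synthetic series.
    """
    rng = random.Random(seed)
    # simple returns from the real series
    returns = [b / a - 1.0 for a, b in zip(real_prices, real_prices[1:])]
    path = [real_prices[0]]
    while len(path) <= n_steps:
        start = rng.randrange(max(1, len(returns) - block + 1))
        for r in returns[start:start + block]:
            path.append(path[-1] * (1.0 + r))
            if len(path) > n_steps:
                break
    return path[:n_steps + 1]
```

As the text notes, such engineered data carries its own governance burden: the synthetic series must be labelled, versioned and never confused with real market data downstream.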

2. The role of public reference prices for hedging

The challenges discussed above in terms of sourcing market data are linked to a second phenomenon: liquidity sourcing during extreme market moves such as 'flash crashes' and the role of CLOBs.

There has been a significant growth of both single-bank and multi-dealer disclosed platforms, particularly in foreign exchange markets. Linked to this, and to the growth of algorithmic trading, has been a rapid growth in internalisation by large dealer banks, where they avoid hedging into traditional interbank CLOBs or trading directly with other wholesale market participants. In normal markets liquidity providers try to avoid interbank platforms with public market data as much as possible as part of their efforts to minimise market impact and information leakage, which has often benefitted clients over the same period through spread compression. That said, the existence of CLOBs provides important places for hedging in more volatile markets. A recent and stark example of this was the unpegging of the Swiss franc by the Swiss National Bank in January 2015. A sudden, unprecedented move saw one-sided flow, with some banks unable to internalise to reduce risk and reaching risk limits, and liquidity rapidly disappearing on single-bank and multi-dealer disclosed platforms. This in turn led to a material increase in such liquidity providers' activity on the interbank CLOBs.

Most of the newer products where algorithmic market making is expanding are less liquid and do not have the public liquidity on one or more CLOBs that is available in foreign exchange. This inevitably increases the tail risk associated with liquidity shocks or sudden gapping in prices in these markets.

The importance of public reference prices goes beyond the question of liquidity in times of stress. It also directly affects the question of fairness. Established manipulative techniques, e.g. inappropriate use of pre-trade information, spoofing and collusion (as discussed in 'FMSB Behavioural Cluster Analysis – Misconduct Patterns in Financial Markets'⁸), are all easier to perpetrate in conditions where public reference prices are harder to establish, as may be the case in these less liquid products. A key goal of algorithmic governance needs to be ensuring that algorithms that go to market are fair in terms of not creating market abuse and market stability risks.

3. More market and concentration risks in less liquid products

As algorithmic trading expands into markets that are less liquid, the associated risks will be greater, including the likelihood of 'gap' pricing driven by idiosyncratic events.

Hold times in liquid markets like foreign exchange are typically sub-seconds to minutes, but for other FICC markets these times may be days or even weeks. At the same time, it should be noted that as these other markets see more electronification, it is reasonable to assume that hold
