May 3, 2017

# ConnorsRSI Strategy: Sensitivity Analysis

In Simple ConnorsRSI Strategy on S&P500 Stocks I showed a ConnorsRSI strategy on S&P500 stocks. In ConnorsRSI Strategy: Optimization Selection, I narrowed down the optimization to three potential variations that one could consider trading. This post will explore Sensitivity Analysis (also known as: Parameter Sensitivity) to help guide us on what to expect from each variation.

## Definition

This is my working definition of Parameter Sensitivity, not the formal one.

Any strategy has various inputs that one may have optimized on. The idea is to vary a set of these inputs by “small” amounts to determine how much the strategy results change. Does changing a parameter slightly cause a large change in the results? The focus should be on the statistics that one used to select the variations, which in my case were Compounded Annual Return (CAR) and Maximum Drawdown (MDD).

What I do is randomly vary the inputs by 10-20% around the parameter value. Why 10-20%? I think this is a reasonable amount of noise around most parameters. Each person must determine how much noise they will test. The more noise you add, the more I would expect the strategy to deteriorate. If it did not, that would imply that the rule is not adding anything to the strategy and should not be there to begin with. Picking too small an amount of noise can mislead one into thinking all is good.
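As a rough illustration of this idea (a Python sketch rather than the AmiBroker code discussed later; the `jitter` function and parameter names are my own for the example):

```python
import random

def jitter(value, pct=0.15):
    """Return value randomly perturbed by up to +/- pct (here 15%,
    i.e. within the 10-20% noise band discussed above)."""
    return value * (1 + random.uniform(-pct, pct))

# Hypothetical base parameters for a mean-reversion entry/exit rule.
base_params = {"rsi_entry": 20.0, "rsi_exit": 65.0, "limit_pct": 1.5}

# One noisy variation: each parameter moved randomly within +/-15%.
noisy = {k: jitter(v) for k, v in base_params.items()}
```

Each noisy set of parameters would then be fed into a backtest; repeating this many times shows how sensitive the results are to the exact values chosen.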

Once I have run the tests with the added noise, I can look at the results and see if my selected variation is sitting on a peak. That is, do small changes in the parameters “drastically” change the CAR and MDD? Each person will need to define what “drastically” means.

## One Variable

When the strategy only has a few variables, it is possible to simply test them all. Take this simple strategy on the SPY. Buy when the close is greater than the 220-day moving average. Sell when it is below it. Say we originally arrived at this number by doing an optimization in steps of 20 from 140 to 300. Here are the results for all lengths of the moving average from 50 to 100.

You can see peaks at 200, 220 and 240. The original test in steps of 20 would lead one to think all was OK. But look at the red circle at 220. It is sitting on a peak. A 3% change in the 220 value gets you to a trough with a very different return. This sitting on a peak is what we are trying to avoid.
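A one-variable sweep like this is easy to sketch. The following Python toy example (my own illustration, using a synthetic random-walk price series instead of real SPY data) scans moving-average lengths around 220 and records the return of each:

```python
import random

def sma(prices, n):
    """Simple n-period moving average; None where not enough history."""
    return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(prices))]

def timing_return(prices, n):
    """Total return of holding only when yesterday's close was above
    its n-day SMA (a toy long/flat timing backtest, no costs)."""
    ma = sma(prices, n)
    ret = 1.0
    for i in range(1, len(prices)):
        if ma[i - 1] is not None and prices[i - 1] > ma[i - 1]:
            ret *= prices[i] / prices[i - 1]
    return ret - 1.0

# Synthetic price series standing in for real data.
random.seed(42)
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

# Sweep MA lengths in fine steps around the chosen 220-day value.
results = {n: timing_return(prices, n) for n in range(200, 241, 2)}
```

Plotting `results` for real data is what reveals whether the chosen length sits on a narrow peak or a broad plateau.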

## Two Variables

One can also chart the CAR with two variables, but it is much harder to read. Here are the results of buying SPY when RSI(2) is less than X and exiting when RSI(2) is greater than Y.

It is much harder to see if a pair of values is near a steep peak or not.

## More than Two Variables

### Highest CAR

How do I handle more variables? Using Excel, of course. For the ConnorsRSI strategy variation with the highest CAR, run 934, I do the following:

• 39-week high: randomly choose a value between 34 and 44. Even though I did not optimize on this value, I want to test its sensitivity.
• The weekly high happened in the last 25 days: randomly choose a value between 20 and 30.
• ConnorsRSI less than 20.0: randomly choose a value up to 20% higher or lower. These need not be integer values; for example, 21.132.
• Enter on a 1.5% limit move down: randomly choose a value up to 20% higher or lower.
• Exit when ConnorsRSI is above 65: randomly choose a value up to 20% higher or lower.

Given the above, I pick new random values in the given range for each parameter and run a backtest. I do this 1000 times to generate a large set of tests with random noise in each of the parameters. The table below shows the range and average of each of the parameters after the 1000 runs.
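Generating the 1000 noisy parameter sets can be sketched as follows (a Python illustration of the sampling, not the actual AmiBroker code; the dictionary keys are my own names for the rules above):

```python
import random
import statistics

random.seed(1)
runs = []
for _ in range(1000):
    runs.append({
        "week_high_len": random.randint(34, 44),         # 39-week high
        "recent_days":   random.randint(20, 30),         # high in last 25 days
        "crsi_entry":    random.uniform(20.0 * 0.8, 20.0 * 1.2),  # CRSI < 20
        "limit_pct":     random.uniform(1.5 * 0.8, 1.5 * 1.2),    # 1.5% limit
        "crsi_exit":     random.uniform(65.0 * 0.8, 65.0 * 1.2),  # CRSI > 65
    })

# Summarize min / average / max of each parameter across the 1000 runs,
# which should bracket the base values symmetrically.
for key in runs[0]:
    vals = [r[key] for r in runs]
    print(key, round(min(vals), 2),
          round(statistics.mean(vals), 2), round(max(vals), 2))
```

Each entry in `runs` would then drive one backtest, and the CAR/MDD of those 1000 backtests form the distribution analyzed below.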

We can see that the average value is very close to the base value, which is what we want. We also see that the min and max values are what we expect.

From these 1000 runs, I compute the following for CAR and MDD: the average, the standard deviation, and the number of standard deviations the base run's CAR and MDD are from those averages.

The average CAR of our 1000 runs is 20.87 and the average MDD is 21.84, with standard deviations of 2.40 and 1.96 respectively. How does this compare to the base variation with no noise? Our base run had a CAR of 26.63, which is 2.40 standard deviations above the average. Ideally I like to see this number under 1 standard deviation. Between 1 and 2 is a grey area. Above 2, I believe the variation is on a peak and should be discarded. The MDD, however, is under the average, which is a good sign. This is why one should not gravitate to the highest value. You are likely sitting on or near a peak.
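The peak check above is just a z-score. A minimal sketch in Python (the function name is my own; the post does not specify sample vs. population standard deviation, so this assumes the sample version):

```python
import statistics

def z_from_noisy(base, noisy_results):
    """How many standard deviations the base (no-noise) result sits above
    the average of the noisy runs. Under ~1 is good, 1-2 is a grey area,
    above ~2 suggests the base sits on a peak."""
    return ((base - statistics.mean(noisy_results))
            / statistics.stdev(noisy_results))

# Using the post's summary numbers for the highest-CAR variation:
z_car = (26.63 - 20.87) / 2.40   # about 2.4 standard deviations above
```

Applied to run 934, the base CAR lands about 2.4 standard deviations above the noisy average, which by the rule of thumb above flags it as sitting on a peak.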

### Ulcer Index

This is how the parameters will be varied for the Ulcer Index variation, run 673:

• 39-week high: randomly choose a value between 34 and 44.
• The weekly high happened in the last 20 days: randomly choose a value between 15 and 25.
• ConnorsRSI less than 20: randomly choose a value up to 20% higher or lower. These need not be integer values.
• Enter on a 1.5% limit move down: randomly choose a value up to 20% higher or lower.
• Exit when ConnorsRSI is above 60: randomly choose a value up to 20% higher or lower.

Results after 1000 runs.

Here, the base CAR is 1.06 standard deviations above the average, barely into my grey zone, while the MDD is about the same. One can see that adding noise had no large impact on the results.

### Histogram Method

This is how the parameters will be varied for the Histogram Method variation, run 678:

• 39-week high: randomly choose a value between 34 and 44.
• The weekly high happened in the last 20 days: randomly choose a value between 15 and 25.
• ConnorsRSI less than 22.5: randomly choose a value up to 20% higher or lower. These need not be integer values; for example, 21.132.
• Enter on a 1.5% limit move down: randomly choose a value up to 20% higher or lower.
• Exit when ConnorsRSI is above 60: randomly choose a value up to 20% higher or lower.

Results after 1000 runs.

The average CAR of the Histogram Method is higher than the Ulcer Index Method CAR, and the standard deviation is lower. The base CAR is now only 0.31 standard deviations above the average of the 1000 runs. This is good, definitely not on a peak. The MDD is 0.88 standard deviations above, still under one. This variation has a chance of performing well going forward.

## AmiBroker Code

I will be providing the code to do the Parameter Sensitivity in AmiBroker. The code is not meant for beginners. The code does the analysis on a simple RSI strategy. To get the code fill in the form below.

Fill in the form below to get the spreadsheet with the results for the runs and the calculations. You can then apply these to any other columns that you find important.

## Final Thoughts

As we can see, a parameter sensitivity analysis can help us determine if we have picked a variation on a peak. Does not being on or near a peak mean that our strategy will work going forward? No, it does not. We are trying to put the odds in our favor by avoiding those peaks. The only ‘real’ test of a strategy is what it does in real trading. No amount of testing or analysis will give us that answer.

Backtesting platform used: AmiBroker. Data provider: Norgate Data


Jack Brennan - May 3, 2017 Reply

Cesar,

What software, if any, are you using to generate the 1000 runs ? Monte Carlo or bootstrap or Excel add-in ?

Thanks

Jack B.

Marius - May 4, 2017 Reply

Cesar,

Please send me the Amibroker code. Thanks for an informative website.

Thanks,

Marius

Cesar Alvarez - May 4, 2017 Reply

To get the code, fill in the form on the page. You will get the code and spreadsheet.

Ola - May 7, 2017 Reply

Hi Cesar,

thanks for an informative post. I’m also wondering if it might be worth adding noise to the price data as a robustness test?
The thought being that if performance drops off quickly with added noise that could be an indication that the model is overfitted to the noise in the data series. Your thoughts?
Cheers,
Ola

Michael Barrow - June 27, 2017 Reply

Cesar:

Great article – thank you! I agree that OOS testing is not that great, and I never use it, pretty much for the reasons you articulated so well.

Two related things I have been doing lately in my strategy robustness testing are:

1) If I am developing strategies for 15 minute bars, I always test them on 14 min and 16 min bars as well. For daily bars with trading equities, I use 384 min bars as my standard (getting out of the market 6 minutes before the real close for equities), and I also test on 378 min and 390 min bars. There shouldn’t be too much of a drop-off in performance when slightly altering the bar length. Otherwise, it’s too curve-fit. Usually I test the first part of my concept on 15/14/16 or 384/378/390 immediately, and if it passes, I continue to tweak and add additional rules for stops, filtering, exits, etc. If not, I toss the concept right away and save a lot of wasted development time that I can pour into other ideas.

2) I do the same kind of thing with ATR. Since I use a simple 20-bar ATR as the underlying basis for my position sizing calculations, I add an additional input parameter in TradeStation as an ATR Multiplier. I optimize/test that from 0.9 to 1.1 by 0.05 and calculate the ratio between the one with the largest profit value and the smallest. They shouldn’t vary too much. How much is “too much”? Generally, I find that robust strategies come in at or below 1.5ish. When I do this, I also look to make sure that the nearest neighbors to 1.0 (0.95 and 1.05) don’t have too much of a difference in profitability. If their ratio is less than 0.85, I toss. Ruthlessly.

Like you, I have done these same types of simple parameter sensitivity/robustness tests on a number of other key indicators and inputs in my strategies. It is helpful to test them, but I have found that if I just cover the above two (Bar Interval and ATR Multiplier), I can pretty much weed out most of the weeds with a minimal amount of time and effort. The one exception that I will still also test is if an indicator requires a number of lookback bars as an input parameter. For these, I do the same thing as what I do with ATR where I test several values above and below my chosen one and calculate the ratio of the highest to lowest profit within that range, and if it’s too high, boom – gone.

You are onto something very important in strategy development here, and it’s nice that we can figure out pragmatic, simple ways to execute these ideas, without having to resort to physics PhD rocket-sciencery.

My trading results have improved dramatically since I started doing this.

Regards,
Michael

Cesar Alvarez - June 27, 2017 Reply

These are good ways to test the robustness of your strategy. I find that each person needs to find what works for them. You do touch on a point that I keep meaning to write about. I often find that my first stab at a strategy is near the final version of it. If I have to work too hard to make it work, it is not a good sign.