Friday 25 September 2015

Runge-Kutta Methods

As stated in my previous post, I have been focusing on deriving meaningful features as possible inputs to my machine-learning-based trading system, and one idea that has caught my attention is using Runge-Kutta methods to project (otherwise known as "guessing") future price evolution. I have used this sort of approach before in the construction of my perfect oscillator (links here, here, here and here), but I deem it unsuitable for the type of "guessing" I want to do for my mfe-mae indicator, as I need to "guess" rolling maximum highs and minimum lows, which quite often are flat across consecutive price bars. In fact, what I want to do is predict/guess upper and lower bounds for future price evolution, which might turn out to be a more tractable problem than predicting price itself.
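For clarity, here is a minimal Python sketch of the kind of bound targets I have in mind; the function name and the n_ahead horizon are purely illustrative and this is not my actual indicator code.

# Rolling "future" bound targets: for each bar, the maximum high and
# minimum low over the next n_ahead bars, i.e. upper and lower bounds
# on where price actually went. Illustrative only.
import numpy as np

def future_bounds(high, low, n_ahead):
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    n = len(high)
    upper = np.full(n, np.nan)
    lower = np.full(n, np.nan)
    for i in range(n - n_ahead):
        upper[i] = high[i + 1 : i + 1 + n_ahead].max()
        lower[i] = low[i + 1 : i + 1 + n_ahead].min()
    return upper, lower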

The above wiki link to Runge-Kutta methods is a pretty dense mathematical read, and readers may be wondering how approximating solutions to ordinary differential equations can possibly relate to my stated aim; however, the following links visualise Runge-Kutta in an accessible way:
Readers should hopefully see that what Runge-Kutta basically does is compute a "future" position given a starting value and a function to calculate slopes (a minimal sketch of one such step appears further below). When it comes to prices, our starting value can be taken to be the most recent price/OHLC/bid/offer/tick in the time series, and whilst I don't possess a mathematical function to exactly calculate the future evolution of price slopes, I can use my Savitzky-Golay filter convolution code to approximate these slopes. The final icing on this cake is the weighting scheme for the Runge-Kutta method, as shown in the buttersblog link above, i.e.
y_{n+1} = y_n + b_1 k_1 + b_2 k_2 + b_3 k_3 + b_4 k_4 + \cdots + b_s k_s
which is just linear regression on the k slope values, with the most recent price set as the intercept term! Hopefully this will be a useful way to generate features for my conditional restricted Boltzmann machine, and if I use regularized linear regression I might finally be able to use my particle swarm optimisation code.
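To make the weighted-slopes idea concrete, here is a minimal Python sketch of a single classic fourth-order Runge-Kutta step; the slope function f and step size h are generic stand-ins, and this is purely an illustration rather than anything price specific.

# A single classic fourth-order Runge-Kutta (RK4) step: the "future"
# value is the starting value plus a weighted sum of slope estimates.
# f and h are generic stand-ins for the slope function and step size.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h * k1 / 2.0)
    k3 = f(t + h / 2.0, y + h * k2 / 2.0)
    k4 = f(t + h, y + h * k3)
    # classic weights b = (1/6, 1/3, 1/3, 1/6) applied to the four slopes
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0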
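Putting the pieces together, the sketch below shows one plausible reading of the idea rather than finished code: the k values are taken to be slope estimates from least-squares polynomial fits over trailing windows of different lengths (the same fit a Savitzky-Golay filter performs, evaluated at the window end), and the b weights are then found by ridge (regularized) regression with the most recent close acting as the fixed intercept. The window lengths, polynomial order and ridge penalty are all illustrative choices.

# Slope features k_i and ridge-regressed weights b_i for
#     guess = last_close + b_1*k_1 + ... + b_s*k_s
# All parameter choices here are illustrative.
import numpy as np

def slope_features(close, windows=(5, 9, 13, 21), polyorder=2):
    # k_i: fitted slope at the most recent bar of each trailing window,
    # from a least-squares polynomial fit (a Savitzky-Golay style estimate).
    close = np.asarray(close, dtype=float)
    n = len(close)
    feats = np.full((n, len(windows)), np.nan)
    for i in range(max(windows) - 1, n):
        for j, w in enumerate(windows):
            x = np.arange(w)
            y = close[i - w + 1 : i + 1]
            coeffs = np.polyfit(x, y, polyorder)
            feats[i, j] = np.polyval(np.polyder(coeffs), w - 1)
    return feats

def fit_rk_weights(feats, close, target, ridge_lambda=1.0):
    # Solve for b in  target - close = feats @ b  with an L2 penalty;
    # the most recent price plays the role of the fixed intercept.
    close = np.asarray(close, dtype=float)
    mask = ~np.isnan(feats).any(axis=1) & ~np.isnan(target)
    K = feats[mask]
    y = target[mask] - close[mask]
    b = np.linalg.solve(K.T @ K + ridge_lambda * np.eye(K.shape[1]), K.T @ y)
    return b

# Usage, with the future_bounds targets from the earlier sketch:
#   feats = slope_features(close)
#   upper, lower = future_bounds(high, low, n_ahead=5)
#   b_up = fit_rk_weights(feats, close, upper)
#   upper_guess = close[-1] + feats[-1] @ b_up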

More in due course.
