
 

DEVELOPERS OF QUALITY SOFTWARE SINCE 1994

 


 



 

 
 

PUBLICATIONS & TECHNICAL MEMOS



FINANCIAL MARKET ANALYSIS
(2012) Correlation And Prediction In The Stock Markets
(2011) Induced Correlation In Stock Market Data
(2010) Trendspotting In The Stock Market
(2009) Predicting Market Data With A Kalman Filter
(2008) Least-Squares Prediction Formulas for Non-Stationary Time-Series
(2008) Linear Estimation and the Kalman Filter
(2006) Harnessing the (Mis)Behavior of Markets
(2003) Data Smoothing By Vector Space Methods
(2001) Computerized Screening for Cup-With-Handle Patterns, Part 2 - Trading Within the Cup
(2000) Market Data Prediction with an Adaptive Kalman Filter
(1998) Computerized Screening for Cup-With-Handle Patterns
(1996) Pattern Recognition in Time Series

SCHEDULING
(2006) The Tapeboard Problem and a New Scheduling Algorithm
(2006) Scheduling Algorithms For Concurrent Events

ENCRYPTION
(1995) Encryption Algorithms and Permutation Matrices

POLYMER CHEMISTRY
(2004) A New Technique for Studying Gelation in Polyesters

 


 

Correlation And Prediction In The Stock Markets
by Rick Martinelli
Copyright ©, Haiku Laboratories 2012

One of the first models of stock market data was introduced by Bachelier in 1900 [1].  His model assumes that daily stock price changes form a white-noise time series, i.e., a stationary, uncorrelated series with zero mean and constant variance.  As such, it was not a predictive model.  More sophisticated models, such as auto-regressive (AR) models, exploit the correlations in weakly stationary series to provide predictions [2], but stock data spanning more than a few days is not weakly stationary either [3,4].  Some of the non-stationary features in these series may be modeled by generalized AR models like ARCH [5] and GARCH [6], which incorporate a time-dependent variance and/or auto-covariance.  In this paper, the basic assumption is that one of the non-stationary properties enjoyed by many stocks is a high auto-correlation in price changes, implying a longer-than-average trending period during which linear prediction is more accurate.  (More)
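
By way of illustration, here is a minimal sketch of the idea (not the paper's method), assuming a short, hypothetical list of daily closing prices: estimate the lag-1 auto-correlation of the price changes and, when it is high, extrapolate the recent changes one step ahead.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation of a series x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Hypothetical daily closing prices (illustration only).
prices = np.array([10.0, 10.2, 10.5, 10.4, 10.9, 11.3, 11.6, 11.5, 12.0, 12.4])
changes = np.diff(prices)

rho = lag1_autocorrelation(changes)
print("lag-1 autocorrelation of price changes:", round(rho, 3))

# When changes are positively correlated (a trending period), a naive
# one-step-ahead forecast is the last price plus the average recent change.
if rho > 0.2:                      # threshold is an arbitrary choice here
    forecast = prices[-1] + changes[-3:].mean()
    print("trend-based forecast of next price:", round(forecast, 3))
```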



 

Induced Correlation In Stock Market Data
by Rick Martinelli
Copyright ©, Haiku Laboratories 2011

Daily stock-market data is recorded for four prices: the open, the close, and the high and low prices of the day.  A glance at a stock chart shows that the four individual price series are very similar, that is, highly correlated.  More importantly, in many cases their price changes, or increments, are also highly correlated, both with each other, and with their own and the others’ past behavior.  For example, Figure 1 shows the open and close increments for one quarter of SP500 data (63 days) ending 09 Sep 2011.     (More)
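
As a rough illustration of the kind of measurement involved (using made-up open and close series, not the SP500 data of Figure 1), the increments and their correlations can be computed as follows:

```python
import numpy as np

# Hypothetical open and close prices for a short span (illustration only).
opens  = np.array([100.0, 101.2, 100.8, 102.0, 103.1, 102.5, 103.8, 104.4])
closes = np.array([100.9, 100.7, 101.8, 103.0, 102.6, 103.5, 104.3, 105.0])

d_open  = np.diff(opens)    # open-to-open increments
d_close = np.diff(closes)   # close-to-close increments

# Correlation between the two increment series.
print("corr(d_open, d_close):", round(np.corrcoef(d_open, d_close)[0, 1], 3))

# Correlation of each series with its own previous value (lag 1).
print("corr(d_open[t], d_open[t-1]):",
      round(np.corrcoef(d_open[1:], d_open[:-1])[0, 1], 3))

# Cross-correlation with the other's past behavior (close change vs. prior open change).
print("corr(d_close[t], d_open[t-1]):",
      round(np.corrcoef(d_close[1:], d_open[:-1])[0, 1], 3))
```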



 

Trendspotting In The Stock Market
by Rick Martinelli
Copyright ©, Haiku Laboratories 2010

Trends And Cycles

The vast majority of stock market data can be thought of as combinations of “trends” of various lengths and directions, and “cycles” of various frequencies and durations.  Consequently, many techniques have been developed to discern when a particular stock is trending or cycling.  The current article describes a simple approach to trend-spotting based on the idea that correlations in price differences translate into trends in prices.

To see this, consider exactly what constitutes a trend at the smallest level.  Assuming the minimum number of consecutive prices required to spot a trend is three, there are four possible arrangements of three prices, as shown in Figure 1.  In the first case, prices show two successive increases and are in a short upward trend, a micro-trend upward.  (More)
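
A minimal sketch of this classification, assuming a hypothetical price list; it labels each window of three consecutive prices as one of the four arrangements (ties between equal prices are not treated specially here):

```python
def micro_trends(prices):
    """Classify each window of three consecutive prices.

    'up'     : two successive increases  (micro-trend upward)
    'down'   : two successive decreases  (micro-trend downward)
    'peak'   : up then down
    'valley' : down then up
    """
    labels = []
    for a, b, c in zip(prices, prices[1:], prices[2:]):
        if b > a and c > b:
            labels.append("up")
        elif b < a and c < b:
            labels.append("down")
        elif b > a:
            labels.append("peak")
        else:
            labels.append("valley")
    return labels

# Hypothetical price list (illustration only).
print(micro_trends([10.0, 10.3, 10.6, 10.4, 10.1, 10.5]))
# -> ['up', 'peak', 'down', 'valley']
```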



 

Predicting Market Data With A Kalman Filter
by Rick Martinelli and Neil Rhoads
Copyright ©, Haiku Laboratories 2009

The chart below shows daily opens for one year (252 days) of Ford Motor Co. (F).  According to modern financial engineering principles, market data such as this is supposed to be a Brownian motion, which means that the daily price changes form a white-noise process.  White noise is a random process in which consecutive values are independent of each other (among other things), meaning a price increase is just as likely as a decrease each day.  However, in reality, it is not uncommon for a particular market item to have several consecutive down days, or up days, over a short time span.  During such spans the prices are said to be correlated.  The objective is to harness these correlations with a Kalman filter for prediction.  (More)
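
For illustration only, here is a bare-bones Kalman filter with a constant-velocity state (price and daily change); the noise settings are arbitrary and this is not the filter developed in the article:

```python
import numpy as np

def kalman_predict(prices, q=0.01, r=1.0):
    """Minimal constant-velocity Kalman filter over a price series.

    State is (price, daily change); q and r are process and measurement
    noise variances (chosen here arbitrarily).  Returns the one-step-ahead
    prediction made after the last observation.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # we observe the price only
    Q = q * np.eye(2)
    R = np.array([[r]])

    x = np.array([[prices[0]], [0.0]])       # initial state
    P = np.eye(2)                            # initial covariance

    for z in prices[1:]:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new observation
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    return float((F @ x)[0, 0])              # predicted next price

# Hypothetical prices (illustration only).
print(kalman_predict([10.0, 10.2, 10.5, 10.4, 10.9, 11.3, 11.6]))
```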



 

Harnessing the (Mis)Behavior of Markets: Brownian Motion and Stock Prices
by Rick Martinelli, M.A. and Neil Rhoads, M.S.
Copyright ©, Haiku Laboratories March 2006

In 1900 Louis Bachelier received a doctorate from the University of Paris with a dissertation entitled “Théorie de la Spéculation”, an event that marked the first time a serious academic paper addressed the behavior of markets [1].  In his dissertation, Bachelier proposed that market prices could be modeled as something called Brownian motion.  Slowly, his ideas were adopted by the financial community and are now the foundation of modern financial engineering.  The idea of Brownian motion arose when a botanist named Robert Brown described the chaotic behavior of pollen grains suspended in a fluid and viewed under a microscope.  He reasoned (correctly) that their motion was due to large numbers of random molecular forces impinging on the grains.  Using similar reasoning, Bachelier assumed that market prices vary due to large numbers of random effects, such as the whims of individual traders, and hence may be modeled as Brownian motion.  (More)



 

Least-Squares Prediction Formulas for Non-Stationary Time-Series
by Rick Martinelli, M.A.
Copyright ©, Haiku Laboratories June 2008
Updated July 2011

The purpose of this memo is to derive some least-squares formulas to be used to predict financial market values.  The problem addressed here may be stated as follows: Given n ordered pairs of numeric data {(x(k), y(k)) | k = 1,…,n}, find an expression for the least-squares estimate y*(n+1) of y(n+1) as a linear combination of the previous n data values y(1), y(2), …, y(n).  Here the y(k) represent market data values and the x(k) represent time values increasing with k.  Since large amounts of market data are reported at regular time intervals, the formulas presented here are derived for equal-interval data, commonly known as time-series.  In this case, the usual least-squares formulas are much simplified by assuming x(k) = k.  (More)
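
A small sketch of the simplest case, using hypothetical data: fit a straight line by least squares to equally spaced values with x(k) = k and evaluate it at k = n+1.  The memo derives closed-form coefficient formulas; a generic polynomial fit is used here in their place.

```python
import numpy as np

def ls_predict_next(y, degree=1):
    """Least-squares prediction of y(n+1) from y(1..n) with x(k) = k.

    Fits a degree-`degree` polynomial to the equally spaced data and
    evaluates it one step past the last point.  (The memo works out the
    coefficients in closed form; polyfit produces the same straight-line
    estimate for degree 1.)
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    k = np.arange(1, n + 1)                  # x(k) = k
    coeffs = np.polyfit(k, y, degree)        # least-squares fit
    return float(np.polyval(coeffs, n + 1))  # extrapolate to k = n+1

# Hypothetical equally spaced market values (illustration only).
print(ls_predict_next([10.0, 10.4, 10.3, 10.9, 11.2]))
```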



 

Linear Estimation and the Kalman Filter
by Rick Martinelli, M.A.
Copyright ©, Haiku Laboratories June 2008

The purpose of this paper is to develop the equations of the linear Kalman filter for use in data analysis.  Our primary interest is the smoothing and prediction of financial market data, and the Kalman filter is one of the most powerful tools for this purpose.  It is a recursive algorithm that predicts future values of an item based on the information in all its past values.  It is also a least-squares procedure in that its prediction-error variance is minimized at each stage.  Development of the equations proceeds in steps, starting with ordinary least-squares estimation and moving through the Gauss-Markov estimate, minimum-variance estimation, and recursive estimation to, finally, the Kalman filter.  (More)
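
As a toy illustration of the recursive-estimation step (not the paper's derivation), the estimate of a constant from noisy samples can be updated one measurement at a time:

```python
def recursive_estimate(measurements):
    """Recursive least-squares estimate of a constant from noisy samples.

    Each new measurement z(k) updates the previous estimate with gain 1/k,
    so the result equals the sample mean without storing past values --
    the simplest special case of the recursive estimation the paper builds on.
    """
    x_hat = 0.0
    for k, z in enumerate(measurements, start=1):
        gain = 1.0 / k
        x_hat = x_hat + gain * (z - x_hat)   # estimate = old + gain * innovation
    return x_hat

# Hypothetical noisy measurements of a constant (illustration only).
print(recursive_estimate([4.9, 5.2, 5.0, 4.8, 5.1]))   # equals the sample mean, 5.0
```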



 

Computerized Screening for Cup-With-Handle Patterns, Part 2 - Trading Within the Cup
by Rick Martinelli, M.A. & Barry Hyman, MBA
Copyright ©, Haiku Laboratories March 2001

In our previous article, Cup-With-Handle and the Computerized Approach (TASC 10/98), we described an automated approach to identifying stocks that have set up the “cup-with-handle” structure with proper price and volume characteristics.  The impetus for writing such an algorithm is that on any given day there may be new stocks that “break out” of a cup-with-handle pattern, but by the time investors are aware of them they may have already broken out to levels well above the pivot point (see Figure 1).  Identifying stocks that are set up correctly allows the trader to watch such stocks before they break out, and makes it possible to buy them just as they are breaking above the pivot (on sufficient volume).  It is critical to buy a stock no more than a few percent above the pivot price because in many cases stocks tend to pull back to and test the pivot area before continuing their advance.  If a tight stop-loss discipline is followed, the trader who chases a stock too far above the pivot point is likely to get stopped out on a subsequent pullback to, or just below, the pivot point.  (More)
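
Purely as an illustration of the buy rule described above, here is a hedged sketch with made-up thresholds; the article's actual screening parameters are not reproduced here.

```python
def breakout_signal(price, prev_price, pivot, volume, avg_volume,
                    max_pct_above=0.03, volume_factor=1.5):
    """Return True when a pivot breakout looks buyable.

    Conditions (thresholds are illustrative, not the authors' parameters):
      * the price has just crossed above the pivot,
      * it is no more than a few percent above the pivot,
      * volume exceeds the recent average by a comfortable margin.
    """
    crossed      = prev_price <= pivot < price
    not_chasing  = price <= pivot * (1.0 + max_pct_above)
    heavy_volume = volume >= volume_factor * avg_volume
    return crossed and not_chasing and heavy_volume

# Hypothetical values (illustration only).
print(breakout_signal(price=51.0, prev_price=49.5, pivot=50.0,
                      volume=2_400_000, avg_volume=1_200_000))   # True
```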



 

Computerized Screening for Cup-With-Handle Patterns
by Rick Martinelli, M.A. & Barry Hyman, MBA
Copyright ©, Haiku Laboratories June 1998

In the book entitled "How to Make Money in Stocks", William O'Neil describes an approach to investing called the CANSLIM method. This method combines technical and fundamental analysis to identify some of the best stocks in a cycle. Each letter in the acronym CANSLIM stands for some characteristic of a stock or the market in which it is traded. For example, C stands for the stock's "current quarterly earnings" while M stands for "market direction". The letter I stands for "institutional sponsorship" which is an indication of money flow into or out of a stock, a major aspect of the CANSLIM method. (More)



 

Market Data Prediction With An Adaptive Kalman Filter
by Rick Martinelli, M.A.
Haiku Laboratories 1995
Copyright ©, December 1995

Prediction science has its foundations in mathematical statistics where, until recently, predictions involved a large number of calculations based on complicated mathematical models and had few practical applications.  In the 1950's, when large amounts of radar and other data were being collected, and just as computers were becoming available, the need for different prediction methods that were more suited to the new technologies became apparent.  New linear prediction algorithms were introduced by scientists and engineers to satisfy this need.  One of these has become known as the  Kalman Filter, named for its author, R.E. Kalman, who introduced it in 1960 (see reference [1]). (More)



 

Data Smoothing by Vector Space Methods
by Rick Martinelli, M.A.
Haiku Laboratories 2003
Copyright ©, June 2003

Suppose a time-varying process x(t) is measured at regular intervals, and it is known that the measurements are contaminated with noise.  If we let z(k) represent the measurement at the kth interval,  the situation may be represented by

(1)        z(k) = x(k) + y(k),    k = 1, 2, ..., N,

where {x(k)} is called the process, {z(k)} is called the data, {y(k)} are samples from a zero-mean random sequence with fixed variance σ², and N is the number of measurements.  The data-smoothing problem is to estimate the process {x(k)} from the data {z(k)}.  This is a centuries-old problem, first addressed by the likes of Gauss and Legendre, who formulated the first least-squares estimates.  (More)
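
One concrete instance of such smoothing, assuming hypothetical noisy data: project the measurements onto the subspace spanned by low-degree polynomials by least squares.  The article's own vector-space construction may differ.

```python
import numpy as np

def smooth_by_projection(z, degree=2):
    """Smooth noisy data by projecting it onto a low-degree polynomial subspace.

    Builds a basis of polynomials in k, forms the least-squares projection of
    the data onto their span, and returns the projected (smoothed) values.
    This is one instance of smoothing by vector-space methods, not necessarily
    the article's construction.
    """
    z = np.asarray(z, dtype=float)
    k = np.arange(1, len(z) + 1, dtype=float)
    A = np.vander(k, degree + 1)                 # columns span the subspace
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return A @ coeffs                            # projection of z onto the subspace

# Hypothetical noisy measurements z(k) = x(k) + y(k) (illustration only).
rng = np.random.default_rng(0)
k = np.arange(1, 21)
z = 0.05 * k**2 + rng.normal(0.0, 0.3, size=k.size)
print(np.round(smooth_by_projection(z), 2))
```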



 

Pattern Recognition In Time-Series
by Rick Martinelli, M.A.
Haiku Laboratories 1995
Copyright ©, July 1995

Pattern recognition is a general term that has been used to describe a variety of different, but related, phenomena.  The ability of a camera and computer to discern a particular image in a visually noisy environment is a classic example from engineering.  This article is concerned with patterns that appear in market data charts and that often precede other patterns of interest, such as a sustained upward trend in price.  The motivation for this work came from the needs of market traders with large portfolios of stocks, who must search each of their charts for patterns that are currently "setting up". ... The method described in this article allows a pattern to be specified as another chart segment of any length, provided it is shorter than the chart data being analyzed, and provides a statistically rigorous measure of the degree to which this segment resembles any other segment of the same length.  (More)
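
A minimal sketch of this kind of matching, with a made-up series and template; a normalized correlation score stands in for the article's statistical measure, which is not specified in this excerpt.

```python
import numpy as np

def match_pattern(series, template):
    """Score how much each window of `series` resembles `template`.

    Slides the template across the series and returns, for each window of the
    same length, the correlation coefficient between window and template.
    (The article's own similarity measure may differ; correlation is one
    natural, scale-free choice.)
    """
    series = np.asarray(series, dtype=float)
    template = np.asarray(template, dtype=float)
    m = len(template)
    scores = []
    for start in range(len(series) - m + 1):
        window = series[start:start + m]
        scores.append(np.corrcoef(window, template)[0, 1])
    return np.array(scores)

# Hypothetical chart data and a short "setup" pattern (illustration only).
data = [10, 11, 12, 11, 10, 11, 13, 15, 14, 13]
pattern = [10, 11, 13]        # a small rising segment
print(np.round(match_pattern(data, pattern), 2))
```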



 

Encryption Algorithms And Permutation Matrices
by Rick Martinelli, M.A.
Copyright ©, Haiku Laboratories June 2003

The electronic transmission of text-based information is widespread today and expected to increase with time.  Many situations arise in which some degree of privacy is desired for the transmitted message.  This memo describes a family of encryption algorithms that can be used to translate an ASCII text message into another ASCII text message of the same length, whose characters are permutations of the originals.  These algorithms have the properties (More)
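
Purely illustrative and not the memo's algorithm: a keyed permutation applied to the characters of a message produces another ASCII message of the same length whose characters are a permutation of the originals, and the same key undoes it.

```python
import random

def permute_message(message, key, decrypt=False):
    """Scramble (or unscramble) an ASCII message by permuting its characters.

    A keyed pseudo-random permutation stands in for the permutation matrix;
    the memo's actual family of algorithms is not reproduced here.  The
    output has the same length and the same characters as the input.
    """
    perm = list(range(len(message)))
    random.Random(key).shuffle(perm)            # key determines the permutation
    out = [''] * len(message)
    for new_pos, old_pos in enumerate(perm):
        if decrypt:
            out[old_pos] = message[new_pos]     # undo the permutation
        else:
            out[new_pos] = message[old_pos]     # apply the permutation
    return ''.join(out)

cipher = permute_message("MEET AT NOON", key=1994)
print(cipher)
print(permute_message(cipher, key=1994, decrypt=True))   # "MEET AT NOON"
```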



 

Scheduling Algorithms For Concurrent Events
by Rick Martinelli, M.A. and Neil Rhoads, M.S.
Copyright ©, Haiku Laboratories October 2006

This memo provides a rigorous foundation for the development of algorithms to optimally schedule concurrent events.  While the algorithms are generic and can be used for any type of events, a typical application is the assignment of guest reservations to rooms in a large hotel.  In the case of a hotel or condominium property, optimal scheduling means achieving maximum occupancy by never rejecting a reservation due to inefficient scheduling.  In what follows, we find the minimum number of rooms required to accommodate a given set of reservations, and we describe an algorithm for automatically making assignments in situations where guests can be freely assigned to any room.  We then consider the more realistic situation where certain reservations must be assigned to particular rooms and we provide a second algorithm for handling this more difficult case. (More)
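
For the free-assignment case, the minimum number of rooms can be found by a standard sweep over sorted check-in and check-out events; this sketch (with hypothetical reservations) illustrates that count, not the memo's assignment algorithms.

```python
def min_rooms(reservations):
    """Minimum number of rooms needed for a set of (arrival, departure) days.

    Counts overlapping stays with a sweep over sorted events; a departure on
    day d frees the room for an arrival on day d.  This is the free-assignment
    case only; the memo's second algorithm for fixed room requests is not shown.
    """
    events = []
    for arrive, depart in reservations:
        events.append((arrive, 1))     # guest checks in
        events.append((depart, -1))    # guest checks out
    events.sort()                      # departures sort before arrivals on ties
    occupied = peak = 0
    for _, change in events:
        occupied += change
        peak = max(peak, occupied)
    return peak

# Hypothetical reservations as (arrival_day, departure_day) pairs.
print(min_rooms([(1, 4), (2, 5), (4, 7), (5, 9)]))   # -> 2
```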



 

The Tapeboard Problem and a New Scheduling Algorithm
by Rick Martinelli and Neil Rhoads
Copyright ©, Haiku Laboratories 2006

Large property management companies typically handle hundreds of rental units and thousands of bookings, sometimes spanning several years.  The tapeboard problem was brought to our attention by the IT manager of one such company.  Suppose a rental property has scheduled a large set of reservations, or bookings, into its various rental units, each booking defined by its start and end days relative to today, and some of the guests have indicated that they want specific units.  The general problem presented by the IT manager was: (More)



 

A New Technique for Studying Gelation in Polyesters
by Rick Martinelli, M.A.
Copyright ©, Haiku Laboratories April 2004

A new method for studying polymer network formation has been devised.  Crosslinking reactions are carried out in a recording viscometer, which provides accurate determination of incipient gel points and also serves as a high-speed stirrer.  The molten, nonstoichiometric mixtures are reacted to completion to eliminate the inaccuracies inherent in the determination of reaction extent and this, together with the use of esterification reactions with minimal side reactions, reduces many of the problems of previous methods.  The experimental results for the reactions of simple model compounds are in very close agreement with Flory’s network theory.  A system containing crosslinking reagents with unequally reactive groups has also been considered and the accuracy of the method enables the reactivity ratios of the different groups to be calculated. (More)
