A General-Purpose GPU Reservoir Computer

dc.contributor.author: Keith, Tūreiti
dc.date.accessioned: 2013-04-17T03:21:35Z
dc.date.available: 2013-04-17T03:21:35Z
dc.date.issued: 2013
dc.description.abstract: The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing this reservoir and taking outputs from it, its dynamics may be harnessed to compute complex problems at "the edge of chaos". One of the first forms of reservoir computer, the Echo State Network (ESN), is an artificial neural network that builds its reservoir from a large, sparsely connected recurrent neural network (RNN). The ESN was introduced as an innovative way to train RNNs, which until that point had been a notoriously difficult task. The ESN's innovation is that, rather than training the RNN weights, only the output is trained. If the output is assumed to be linear, linear regression may be used. This work presents an implementation of the Echo State Network, together with an offline linear-regression training method based on Tikhonov regularisation, targeting the general-purpose graphics processing unit (GPU or GPGPU). The behaviour of the implementation was examined by comparing it with a central processing unit (CPU) implementation and by assessing its performance on several studied learning problems. These assessments used all four cores of an Intel i7-980 CPU and an Nvidia GTX480. Compared with the CPU implementation, the GPU ESN implementation demonstrated a speed-up from a reservoir size of between 512 and 1,024, reaching a maximum speed-up of approximately 6 at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was largely slower than the CPU implementation; speed-ups were observed only at the largest reservoir and state-history sizes, the largest being 2.6813.
The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time series, and a multiple superimposed oscillator (MSO). The normalised root-mean-square errors of the predictors were compared. The best observed sinusoid predictor outperformed the best MSO predictor by four orders of magnitude; in turn, the best observed MSO predictor outperformed the best Mackey-Glass predictor by two orders of magnitude.
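The training scheme the abstract describes (drive a fixed random reservoir, then fit only a linear readout offline with Tikhonov-regularised least squares) can be sketched as follows. This is a minimal CPU illustration, not the thesis's GPU implementation; the reservoir size, sparsity, spectral radius, regularisation parameter, and the sinusoid task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100        # reservoir size (illustrative; the thesis tests up to 2,048)
beta = 1e-8    # Tikhonov regularisation parameter (assumed value)

# Sparse random reservoir, rescaled so its spectral radius is below 1
# (a common sufficient condition for the echo state property).
W = rng.uniform(-0.5, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

# Task: one-step-ahead prediction of a sinusoid (one of the three problems).
t = np.arange(600)
u = np.sin(2 * np.pi * t / 25)

# Drive the reservoir and collect states; discard an initial washout period.
x = np.zeros(N)
states = []
for n in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[n])
    states.append(x.copy())
washout = 100
S = np.array(states[washout:]).T       # N x T matrix of reservoir states
Y = u[washout + 1:][None, :]           # 1 x T matrix of teacher outputs

# Offline Tikhonov-regularised readout (ridge regression):
# W_out = Y S^T (S S^T + beta I)^{-1}, computed via a linear solve.
W_out = np.linalg.solve(S @ S.T + beta * np.eye(N), S @ Y.T).T

pred = W_out @ S
nrmse = np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y)
print(f"training NRMSE: {nrmse:.2e}")
```

Only `W_out` is ever trained; `W` and `W_in` stay fixed, which is why the otherwise difficult RNN training problem reduces to a single regularised linear solve.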
dc.identifier.uri: http://hdl.handle.net/10092/7617
dc.identifier.uri: http://dx.doi.org/10.26021/1513
dc.language.iso: en
dc.publisher: University of Canterbury. Department of Electrical & Computer Engineering
dc.relation.isreferencedby: NZCU
dc.rights: Copyright Tūreiti Keith
dc.rights.uri: https://canterbury.libguides.com/rights/theses
dc.subject: graphics processing unit
dc.subject: echo state network
dc.subject: reservoir computer
dc.subject: tikhonov regularisation
dc.subject: linear regression
dc.title: A General-Purpose GPU Reservoir Computer
dc.type: Theses / Dissertations
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: University of Canterbury
thesis.degree.level: Masters
thesis.degree.name: Master of Engineering
uc.bibnumber: 1924163
uc.college: Faculty of Engineering
Files
Original bundle
thesis.pdf (1.38 MB, Adobe Portable Document Format)
Keith_Use_of_thesis_form.pdf (84.35 KB, Adobe Portable Document Format)