One of the most difficult parts of living with epilepsy is that seizures are often unpredictable and occur without warning. For decades, the epilepsy research community has dreamed of developing technology that could warn individuals that a seizure was likely so that they could take action to prevent the seizure or mitigate its effects.
A few years ago, NeuroVista developed the first seizure-forecasting device to prove successful in humans (Cook et al., 2013). The device, which continuously monitored brain activity from chronically implanted intracranial electrodes, was very successful in some patients but not in others. NeuroVista was not able to attract sufficient investment to flourish and folded. However, the company’s success and data have galvanized the field of seizure forecasting and inspired engineers to develop next-generation seizure prediction devices and algorithms that will be more accurate and less invasive.
Two Kaggle Seizure Prediction competitions exemplify this effort. The first, in 2014, was based in large part on data from the NeuroVista device implanted in dogs; the second, in 2016, was based entirely on NeuroVista data from three human patients. The winning algorithms from the two competitions achieved about the same degree of success: an area under the ROC curve (AUC) of about 0.80 (where 1.0 is perfect discrimination and 0.5 is chance performance). This level of performance is a bit discouraging, as it is not clear whether it is accurate enough to be useful to patients. More troubling, seizure prediction accuracy did not improve between 2014 and 2016, despite tremendous advances in machine learning tools over that period (e.g., mature software for deep learning and extreme gradient boosting).
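For readers unfamiliar with the metric, here is a minimal, dependency-free sketch of what an AUC score measures: the probability that a randomly chosen preictal (pre-seizure) clip is ranked above a randomly chosen interictal (baseline) clip. The labels and scores below are made up for illustration and have nothing to do with the actual competition data.

```python
def auc_score(labels, scores):
    """Pairwise (Mann-Whitney) computation of the area under the ROC curve.

    Counts, over all (preictal, interictal) pairs, how often the preictal
    clip receives the higher score; ties count as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]  # preictal scores
    neg = [s for y, s in zip(labels, scores) if y == 0]  # interictal scores
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = preictal (seizure imminent), 0 = interictal (baseline)
labels = [0, 0, 0, 0, 1, 1, 0, 1, 0, 1]
scores = [0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.4, 0.9, 0.2, 0.5]

print(auc_score(labels, scores))      # a good but imperfect forecaster
print(auc_score(labels, labels))      # 1.0: perfect ranking
print(auc_score(labels, [0.5] * 10))  # 0.5: chance (all ties)
```

An AUC of 0.80 therefore means the winning models ranked a preictal clip above an interictal one about 80% of the time.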
I recently attended an epilepsy research conference, where the organizer of the 2016 competition, Levin Kuhlmann, gave an excellent summary of the competition and its results. In particular, he revealed some details of the data that made the competition unrealistically difficult, which means the competition results surely underestimate, to some degree, our current ability to forecast seizures.
Specifically, the limitations were:
- Some data were initially mislabeled, with the same segments appearing as both interictal and preictal. The labels were eventually corrected, but contestants then had less time to work with the corrected labels than originally intended.
- The data were chosen from the three patients in the NeuroVista trial (Patients 3, 9, & 10) who had the most unpredictable seizures (according to the original NeuroVista algorithm).
- Some competition data were acquired between 30 and 100 days after device implantation. It appears to take about 100 days for recordings from intracranial electrodes to stabilize after implantation (Ung et al., 2016). Consequently, the data are significantly non-stationary during this period, and this non-stationarity degrades competition performance because the test data were all acquired after the training data.
- The time of data acquisition was not available to contestants. It is clear that seizures follow circadian, and even multidien (multi-day), rhythms in some patients. Levin has done some nice analyses showing that including time of day as a feature boosts prediction performance.
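As a sketch of how time of day could be supplied to a model, a standard trick is to encode clock time cyclically with sine and cosine so that 23:00 and 01:00 are treated as close together. This is a generic encoding for illustration, not necessarily the analysis Levin performed.

```python
import math

def time_of_day_features(hour_of_day):
    """Encode clock time cyclically so 23:00 and 01:00 end up close together.

    A raw hour number (0-23) would place midnight and 23:00 far apart;
    mapping the hour onto the unit circle preserves circadian proximity.
    """
    angle = 2.0 * math.pi * (hour_of_day % 24.0) / 24.0
    return (math.sin(angle), math.cos(angle))

# These two values could simply be appended to each clip's EEG feature
# vector before training the classifier.
print(time_of_day_features(0))   # midnight -> (0.0, 1.0)
print(time_of_day_features(6))   # 6 am     -> (1.0, ~0.0)
```

The sin/cos pair avoids the artificial discontinuity at midnight that a single hour-of-day feature would introduce.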
Levin is going to publish these details soon, but I wanted to post this brief summary to make clear to others interested in the Kaggle data, and in seizure prediction more generally, that seizure prediction should be significantly more accurate than what this contest achieved. Indeed, there is currently significant energy going into developing next-generation seizure forecasting devices that will be less invasive than NeuroVista’s device. For example, the Epilepsy Foundation is about to fund a $3 million project over three years to develop non-invasive (e.g., wrist-worn) seizure forecasting devices.
Cook, Mark J., Terence J. O’Brien, Samuel F. Berkovic, Michael Murphy, Andrew Morokoff, Gavin Fabinyi, Wendyl D’Souza et al. “Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study.” The Lancet Neurology 12, no. 6 (2013): 563-571.
Ung, Hoameng, Kathryn A. Davis, Drausin Wulsin, Joost Wagenaar, Emily Fox, John J. McDonnell, Ned Patterson, Charles H. Vite, Gregory Worrell, and Brian Litt. “Temporal behavior of seizures and interictal bursts in prolonged intracranial recordings from epileptic canines.” Epilepsia 57, no. 12 (2016): 1949-1957.