Sunday 26 July 2015

Sensor Glucose on the 640g Part 3: Final SG

Home straight now - in this post, I'm assuming you've read (or at least skimmed) the first two parts of this journey: ISIG calculation and Calibration calculation. As with the previous pieces of this puzzle, this is my best guess...

So we now have ISIG and the calibration factor sewn up, and it should be a short "hop" to generate the Sensor Glucose (SG) value on the screen... Well, almost. In theory, you could take the ISIG value (nA), multiply it by the calibration factor (mg/dl/nA) [internally, the 640g works in mg/dl] and hey presto, your SG value (mg/dl) is ready, right?


Except we still have potentially significant noise floating around in each (5 minute) ISIG reading. If you used the calculation above as-is, you would likely see areas of SG history with short-term (5-10 minute) fluctuations in SG: the snip below shows the Medtronic (red) and "raw" calculation (blue) - ignore the offset (due to the calibration), but look at the magnitude of the "spikes" between the two datasets.


Although significant changes in SG can occur over a short time period (think post-hypo treatment, for example), they are not usually followed by a rapid reversal - such 'high frequency' behaviour is likely to be noise rather than real glucose levels. Whilst it could be removed using an averaging filter (for example, taking the average of the last two or three SG readings as a crude method), such smoothing may mask significant and real changes in glucose levels. By definition, such smoothing filters, when used in real time, delay picking up changes in the signal - not a good thing in a system which already has a ten minute or so delay between BG and SG readings.
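To see why the crude averaging approach lags, here's a quick sketch (my own illustration, nothing to do with the pump's code) of a 3-point moving average responding to a genuine step change in glucose:

```python
# A crude 3-point moving average over SG readings, showing the lag problem:
# a real step change takes several 5-minute readings to show up fully.

def moving_average(readings, window=3):
    out = []
    for i in range(len(readings)):
        # Average the last `window` readings available so far
        recent = readings[max(0, i - window + 1): i + 1]
        out.append(sum(recent) / len(recent))
    return out

step = [100, 100, 100, 160, 160, 160]   # a genuine rapid rise in mg/dl
print(moving_average(step))
# the smoothed trace only reaches 160 two readings after the real change
```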

Clearly some sort of "average" is still needed, along with a crystal ball to make an educated guess at the likely, real SG reading right now. One option is the Kalman filter, and that's the approach I've used here. Originally developed in the 1960s, it combines previous measurements with your best 'guess' at the current measurement (the blue line above) to estimate the 'true' current value. I won't go into the mathematical detail here - there's a good introduction to the technique on Bilgin Esme's blog if you're interested. It was originally applied to calculating the true trajectory and position of the Apollo spacecraft as they journeyed to and from the moon, but has been used in a myriad of situations since, including clinical applications.

Essentially, the Kalman filter requires a couple of parameters that control the estimated measurement (from the sensor) and process (full calculation) error (noise). Values that translate to around 0.25 - 0.5 mmol/L of SG give a good fit to the Medtronic data. Using the example above, the green line in the snip below shows the Kalman-filtered output, compared to the Medtronic data in red.
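For the curious, here's a minimal 1-D Kalman filter of the sort described above - very much my own sketch, not Medtronic's implementation, and the noise parameters (q, r) are illustrative rather than the fitted values:

```python
# Minimal 1-D Kalman filter for smoothing SG readings (a sketch only).
# q: process noise variance, r: measurement noise variance, both in
# (mg/dl)^2 - r = 25 corresponds to a ~5 mg/dl (~0.28 mmol/L) std dev.

def kalman_smooth(raw_sg, q=0.25, r=25.0):
    estimate = raw_sg[0]    # start from the first raw reading
    p = 1.0                 # initial estimate uncertainty
    smoothed = [estimate]
    for z in raw_sg[1:]:
        # Predict: assume glucose roughly constant between 5-min readings,
        # so only the uncertainty grows
        p += q
        # Update: blend the prediction with the new raw reading
        k = p / (p + r)                 # Kalman gain
        estimate += k * (z - estimate)
        p *= (1 - k)
        smoothed.append(estimate)
    return smoothed

noisy = [100, 120, 100, 120, 100]       # raw SG with 20 mg/dl "spikes"
print(kalman_smooth(noisy))
```

Larger r (less trust in each reading) gives heavier smoothing; larger q (more expected real movement) makes the filter track changes faster - which is presumably the trade-off Medtronic tuned.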




So after all that, what have we got? Well, we have SG values that typically agree to within a couple of percent over the calculation period (up to five sensors, 20,000 data logs currently). Two examples are shown below, both on fairly challenging days in terms of highs and lows... Not perfect, but close.



Please remember, what I've tried to do is match the Medtronic calculation - I've not tried to improve or optimise it. You can see examples above of both good and bad tracking by both algorithms when compared to the BG data (green dots for each reading, orange dots at calibration).

What happens if you run the "home-brew" Veo calculation on raw 640g data?
On the 640g data I have from users who have had problems, there is an improvement in MARD by moving to the "Veo" algorithm. One example is given below, where tracking leading up to a BG reading (and then a calibration point near the top of this graph) is improved:

 
"640g" (Left) and "Veo" (Right) Calculations (Green) and Medtronic SG (Red) with 640g data

However, overall, the improvement in MARD is not sufficient to move these datasets into the Enlite specification zone (i.e. less than 15%) across the entire dataset. To my mind, this suggests that whilst the 'new' (640g) calibration regime has certainly not improved the sensor performance in these patients, the key issue appears to be locked in the ISIG dataset coming from the G2L... Unpicking those differences needs more digging.
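For reference, MARD is just the mean absolute relative difference between paired SG and BG readings - a quick sketch (with made-up numbers, not the datasets above):

```python
# MARD (Mean Absolute Relative Difference) between paired SG and BG
# readings - the metric used to compare the algorithms above.

def mard(sg, bg):
    """Both lists in mg/dl, paired by time; returns MARD as a percentage."""
    diffs = [abs(s - b) / b for s, b in zip(sg, bg)]
    return 100 * sum(diffs) / len(diffs)

sg = [110, 150, 95, 200]    # sensor values (illustrative)
bg = [100, 160, 100, 180]   # paired fingerstick values (illustrative)
print(round(mard(sg, bg), 1))   # → 8.1
```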

Still to come, a breakdown of where the sensor errors are coming from...
