Quote:
Originally Posted by rio
You can tell the framework that you will feed it points, so I simply "replay" the points from the .irx. The Ink engine will automatically send you the recognized text as Carbon events, so you just have to register for those and that's about it!
The only problem is that you are supposed to tell the engine when phrases are complete, and that seems to impact the recognition quite a bit. So far I just have a very crude way of segmenting the strokes into phrases, which kinda works, but it shouldn't be difficult to come up with a vertical frequency method to classify the strokes, and that should be more robust.
|
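For anyone curious, the crude segmentation rio describes could look something like the sketch below. This is my own guess at the idea, not rio's actual code: start a new phrase whenever the pen-up gap between strokes is long, or the next stroke sits on a different text line (its vertical extent doesn't overlap the previous stroke's).

```python
# Hypothetical sketch of stroke-to-phrase segmentation (not rio's code).
# A stroke is a list of (x, y, t) points; a phrase is a list of strokes.

def y_extent(stroke):
    """Vertical extent (min_y, max_y) of a stroke."""
    ys = [p[1] for p in stroke]
    return min(ys), max(ys)

def overlaps(a, b):
    """True if two vertical extents overlap, i.e. likely the same text line."""
    return a[0] <= b[1] and b[0] <= a[1]

def segment_phrases(strokes, pause=0.6):
    """Split strokes into phrases on long pen-up pauses or line changes."""
    phrases, current = [], []
    for stroke in strokes:
        if current:
            last = current[-1]
            gap = stroke[0][2] - last[-1][2]  # pen-up time between strokes
            new_line = not overlaps(y_extent(last), y_extent(stroke))
            if gap > pause or new_line:       # phrase boundary
                phrases.append(current)
                current = []
        current.append(stroke)
    if current:
        phrases.append(current)
    return phrases
```

Each completed phrase would then presumably be replayed to the Ink engine and terminated before feeding it the next one.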
*Reads ADC* Ah, now I get it. Quite straightforward really!
Unfortunately, if I understand you & the Ink Services docs correctly, this only works with actual "vector"/point data, i.e. a PNG image file wouldn't work as input. Damn.
Hopefully, one day, I'll be able to hack the V2 & obtain raw point data from the touchpad...
Ah well, wish you all the luck with this!