Quote:
Originally Posted by mrdini
*Reads ADC* Ah, now I get it. Quite straightforward really!
Unfortunately, if I understand you, & the Ink Services docs correctly, this would only work with actual "vector"/points data i.e. a PNG image file wouldn't work as an input. Damn.
Hopefully, one day, I'll be able to hack the V2, & be able to obtain raw data from the touchpad...
Ah well, wish you all the luck with this!
I'm not sure I understand your point about the V2? Here's what I do: I have a simple script on the iLiad that creates a new "notepad" -- a simple folder with a manifest indicating that we're using a specific background PNG image (the gray horizontal lines shown in my screenshot). When I open this newly generated notepad I can scribble on it, and the scribbles are saved in the newly created "notepad" folder/document as a .irx file.
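For the curious, the notepad-creating script boils down to something like the sketch below (shown in Python here; the manifest element names are placeholders for illustration, not the actual iRex schema, so don't copy them verbatim):

#!/usr/bin/env python
# Rough sketch of a notepad-creating script. NOTE: the real iLiad
# manifest schema is not reproduced here -- the XML element names
# below are placeholders, not the actual format.
import os
import sys

def make_notepad(root, name, background_png):
    folder = os.path.join(root, name)
    os.makedirs(folder)
    # Hypothetical manifest: points the notepad at a background image.
    manifest = (
        '<?xml version="1.0"?>\n'
        '<notepad>\n'
        '  <background>%s</background>\n'
        '</notepad>\n' % background_png
    )
    with open(os.path.join(folder, 'manifest.xml'), 'w') as f:
        f.write(manifest)

if __name__ == '__main__':
    # usage: make_notepad.py <documents-root> <notepad-name>
    make_notepad(sys.argv[1], sys.argv[2], 'lines.png')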
This .irx file is a simple XML-encoded file containing the raw data from the scribbles. What I do then is get the file onto the Mac, parse it to extract the data points, and use those along with the background PNG image to re-render the note in a Cocoa view (which then allows me to print it or generate a PDF of it).
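In case it helps, here's a rough Python sketch of that parse-and-render step. The .irx layout assumed below (<stroke> elements holding <point x="..." y="..."/> children) is a guess at the structure, and Pillow stands in for the Cocoa view:

# Minimal parse-and-render sketch. The .irx element names used here
# ('stroke' and 'point') are assumptions, not confirmed tag names;
# adjust them to whatever the real file contains.
import xml.etree.ElementTree as ET
from PIL import Image, ImageDraw

def load_strokes(irx_path):
    """Return a list of strokes, each a list of (x, y) tuples."""
    tree = ET.parse(irx_path)
    strokes = []
    for stroke in tree.iter('stroke'):          # assumed element name
        pts = [(float(p.get('x')), float(p.get('y')))
               for p in stroke.iter('point')]   # assumed element name
        if pts:
            strokes.append(pts)
    return strokes

def render(strokes, background_png, out_png):
    """Draw the scribbles over the notepad's background image."""
    img = Image.open(background_png).convert('RGB')
    draw = ImageDraw.Draw(img)
    for pts in strokes:
        if len(pts) > 1:
            draw.line(pts, fill=(0, 0, 0), width=2)
    img.save(out_png)

if __name__ == '__main__':
    render(load_strokes('note.irx'), 'lines.png', 'rendered.png')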
The recognition phase simply consists of feeding those same data points to the Ink HWR engine, and I get back the recognized text. An additional step is to segment the points into phrases; so far my method is really crude and I want to implement a better segmenting tool.
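To give an idea of what "crude" means here, a minimal gap-based segmenter could look like this: start a new phrase whenever the horizontal gap between consecutive strokes exceeds a threshold. The threshold is arbitrary and would need tuning, and it assumes strokes arrive roughly left to right on a single line (real notes would also need splitting on line breaks):

# Crude phrase segmentation: split on large horizontal gaps between
# consecutive strokes. The 'gap' threshold is an arbitrary placeholder.
def segment_phrases(strokes, gap=40.0):
    """strokes: list of strokes (lists of (x, y)), assumed left-to-right."""
    phrases, current = [], []
    prev_right = None
    for pts in strokes:
        left = min(x for x, _ in pts)
        right = max(x for x, _ in pts)
        if current and left - prev_right > gap:
            phrases.append(current)     # gap too wide: close the phrase
            current = []
            prev_right = None
        current.append(pts)
        prev_right = right if prev_right is None else max(prev_right, right)
    if current:
        phrases.append(current)
    return phrases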
So to sum it up, I don't see anything here that wouldn't work on the V2 (particularly considering I'm using the same OS version!)
Edit: hmm, maybe I just completely misunderstood you and the V2 is some other kind of tablet-like hardware, not an iLiad V2, one that exports notes as PNG... If that's the case, then yes, it won't be easy. Although you could try to vectorise the PNG (that should work quite well) and use the vectorised points to feed the Ink engine. But that's certainly a lot more work!
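If someone wanted to try the vectorising route, a very rough starting point (assuming scikit-image and SciPy are available) could be: threshold the image, thin the ink down to 1-pixel skeletons, and treat each connected component as one stroke. Note the point ordering below is naive scan order, so you'd still need to trace each skeleton into a proper pen trajectory before feeding the Ink engine:

# Very rough PNG "vectorisation": threshold, skeletonize, then label
# connected components as strokes. Points come out in scan order, not
# pen order, so a real pipeline would still trace each skeleton.
import numpy as np
from PIL import Image
from scipy import ndimage
from skimage.morphology import skeletonize

def vectorise(png_path, threshold=128):
    gray = np.asarray(Image.open(png_path).convert('L'))
    ink = gray < threshold                  # dark pixels are ink
    skeleton = skeletonize(ink)             # thin ink to 1-px-wide lines
    labels, n = ndimage.label(skeleton)     # one component per "stroke"
    strokes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        strokes.append(list(zip(xs.tolist(), ys.tolist())))
    return strokes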