Quote:
Originally Posted by Xennex1170
It's not so much the discussion as you seem to have no idea of the difficulty of implementing certain features. You made it sound as if drag-and-drop was so easy it could be done with little effort. I presented an algorithm of a reasonable way to implement drag-and-drop on the current device (Kobo Touch). You can see implementation is not trivial. I wanted to see what kind of design you had which made you think it was so simple to implement. That is all. 
I already said that in order to know how to implement this on this system, I believe we need to know more details about the hardware and software that we (or at least I) do not know. But, okay, we can speculate a little more even though there is little point to adding such a feature.
You say you "presented an algorithm", but only very vaguely. It does not translate directly into, as you say, "the current device (Kobo Touch)". What is actually done will depend on the hardware and software. You mention "layers", for example. What are those, exactly? Normally, in a sophisticated GUI system, one has widgets, and the widgets are drawn at locations and in layered form. The layering is conceptually encoded in the tree structure of widgets as it is created: you would have one widget for the container, with book widgets drawn on top of it. Drag-and-drop functionality is included in most GUI programming systems, so that once a drag operation starts, appropriate messages are sent to the widgets as they are exposed, and they take care of redrawing themselves. So your "algorithm" would be distributed amongst the components of the GUI. In fact I downloaded and tested
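To make the "layers" point concrete, here is a toy sketch (in Python, since we are only talking about the idea, not anyone's actual code; the names Widget and paint_order are my own, not Qt's) of how z-order falls out of the widget tree rather than being managed separately:

```python
class Widget:
    """A toy widget: just a name and a list of child widgets."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def paint_order(widget, out=None):
    # Painter's algorithm: a parent paints itself first, then its
    # children paint on top of it, so "layers" are implicit in a
    # depth-first traversal of the widget tree.
    if out is None:
        out = []
    out.append(widget.name)
    for child in widget.children:
        paint_order(child, out)
    return out

# The container (shelf) paints first; the books are drawn over it.
shelf = Widget("shelf", [Widget("book1"), Widget("book2")])
```

Nothing keeps the layers explicitly; the drawing order is the layering.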
this example on my own system, but that is a richer Qt than the Kobo has, so perhaps you are thinking that in a more primitive environment you would manage "layers" more explicitly and directly yourself. Then we still have to know about the environment. The Kobo uses a version of Qt Embedded, but probably one without drag-and-drop functionality built in. Perhaps the home screen is represented directly as a canvas, with all the widgets being hand-rolled lightweight ones. Then, when a mouse-move event is detected after a drag has started, yes, one will have to redraw the book in its new position and redraw whatever was underneath the book before. I don't think that is going to be very difficult, and probably not so slow as to be unusable, although whether the actual result is acceptable is of course a matter of personal opinion. But the code is very simple: there will be, I think, three rectangles to blit from a pixmap of the screen as it was, namely the book and the two rectangles which are uncovered. The cost of drawing them would probably not be bad. One should only redraw when the move exceeds a minimum distance; otherwise finger jitter might cause some ugliness.
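To be concrete about the "three rectangles": when a same-size book image moves from one position to another, the area it uncovers is at most two axis-aligned rectangles (an L-shape), so a full update is the book blitted at its new position plus those two restores from the saved pixmap. A rough sketch, in Python for brevity with rectangles as (x, y, w, h) tuples (none of this is Kobo code, and the jitter threshold is a made-up number):

```python
def uncovered_rects(old, new):
    """Rectangles of `old` left exposed when the same-size rect moves to `new`.

    For a pure translation the exposed area is at most two rectangles
    (an L-shape), or the whole old rect if the move is large enough
    that old and new no longer overlap.
    """
    ox, oy, w, h = old
    nx, ny, _, _ = new
    dx, dy = nx - ox, ny - oy
    if abs(dx) >= w or abs(dy) >= h:
        return [old]                      # no overlap: restore all of old
    rects = []
    if dx > 0:                            # moved right: left strip exposed
        rects.append((ox, oy, dx, h))
    elif dx < 0:                          # moved left: right strip exposed
        rects.append((nx + w, oy, -dx, h))
    overlap_w = w - abs(dx)               # width of the x-overlap
    overlap_x = max(ox, nx)               # left edge of the x-overlap
    if dy > 0:                            # moved down: top strip exposed
        rects.append((overlap_x, oy, overlap_w, dy))
    elif dy < 0:                          # moved up: bottom strip exposed
        rects.append((overlap_x, ny + h, overlap_w, -dy))
    return rects

def should_redraw(last, cur, min_dist=8):
    """Ignore finger jitter: only redraw after a minimum move (in pixels)."""
    return (cur[0] - last[0]) ** 2 + (cur[1] - last[1]) ** 2 >= min_dist ** 2
```

Each accepted move event is then three blits total: the book pixmap at its new position, and at most two rectangles restored from the saved background.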
You were concerned enough about there being five book sizes to bring that up. Well, the book has to end up at a single size by the time it reaches the bottom, so it is probably best to snap it to that size at the start of the drag (or at the drop, if you are dragging the other way). That is not much of an issue as far as I can see.
If it turns out that blitting these three rectangles is too slow and ugly, one could always just use a wire-frame outline, such as the one used in the PDF drag, which indicates what part of the entire page is currently being displayed. Note how, simultaneously with the drag of the entire PDF image, this little rectangle is cleanly updated without any problems. In the old days, this is how windows were dragged around desktop screens.
I don't see how the code to do all of this is going to be very complicated, and I doubt it would be so inefficient as to be really ugly (though that is still speculation, and in the end a matter of opinion once the result is seen; maybe it would be too slow for most people). However, I believe it would still be a waste of time given all the other things that could be done instead. I think it is pretty silly to be discussing this based on an incidental comment I made (and this is as much time as I am going to spend on it!)
Coming back to reading PDFs: it might make more sense to complain about the requests for "PDF reflow", which suggest it is just one more small request among many. Reflow is something that, aside from being ill defined, is likely to be very complicated. PDF markup does not say what is a header, a footer, a footnote, etc., but to reflow a PDF one has to detect these things and handle them differently. I have been looking at the code in pdftohtml and it is pretty complex. To assemble a page it finds each little piece of text that is drawn, assembles the pieces into a structure which it sorts, and then tries to coalesce them into a linear string so that it can extract the text. That is a lot more complex than dragging a little image. It throws away the position information when it does this linearization, so the structure of headers, footers, footnotes, and even margin notes (not that common, but see for example the A/B pagination in the Critique of Pure Reason) is lost. I think the spatial information could be used to make good guesses at this structure, but doing that well in general would be pretty complex. (Special cases that cover a lot of typical books would not be as hard.)
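To illustrate the kind of thing pdftohtml does, and what gets lost, here is a toy sketch (my own code, not pdftohtml's; the margin fraction and baseline tolerance are made-up numbers): positioned text fragments are sorted and coalesced into lines, and the y-coordinate, which is exactly what gets discarded in the linearization, can be used for a crude header/footer guess.

```python
def linearize(fragments, page_h, margin=0.08, line_tol=2):
    """Sort positioned text fragments into lines of body text.

    fragments: list of (x, y, text), with y measured from the top of
    the page. Fragments in the top/bottom `margin` fraction of the page
    are guessed to be headers/footers and set aside -- the spatial
    information that is lost once everything is one flat string.
    """
    headers, footers, body = [], [], []
    for x, y, text in fragments:
        if y < margin * page_h:
            headers.append(text)
        elif y > (1 - margin) * page_h:
            footers.append(text)
        else:
            body.append((x, y, text))
    body.sort(key=lambda f: (f[1], f[0]))   # top-to-bottom, left-to-right
    lines, cur, cur_y = [], [], None
    for x, y, text in body:
        if cur_y is not None and abs(y - cur_y) > line_tol:
            lines.append(" ".join(cur))     # new baseline: flush the line
            cur = []
        cur.append(text)
        cur_y = y
    if cur:
        lines.append(" ".join(cur))
    return lines, headers, footers
```

Real pages break this simple version immediately (multi-column layouts, footnotes inside the body area, rotated text), which is the point: the general problem is hard even though the happy path is short.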
So I do believe I have some idea about "the difficulty of implementing certain features", at least in this case. That is one main reason why I would like to see PDF reading implemented not by reflow, but by zooming to the maximum in portrait mode, with easily implemented tap-to-advance commands. Such a zoom is probably big enough for most people to read, with no panning required for typical books.
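A back-of-the-envelope version of that last suggestion (my own numbers and zone layout, not anything Kobo does): fit the page width to the screen in portrait mode, count how many tap-advances one page needs at that zoom, and map taps to commands by screen position.

```python
import math

def fit_width_zoom(screen_w, screen_h, page_w, page_h):
    """Zoom so the page fills the screen width.

    Returns (scale, taps_per_page), where taps_per_page is how many
    screenfuls one page needs at that zoom, assuming the tap-to-advance
    steps do not overlap.
    """
    scale = screen_w / page_w
    taps_per_page = math.ceil(page_h * scale / screen_h)
    return scale, taps_per_page

def tap_command(x, screen_w):
    # Hypothetical zones: left third of the screen goes back,
    # the rest advances.
    return "prev" if x < screen_w / 3 else "next"
```

For a typical novel-sized page the width-fit zoom leaves only a couple of screenfuls per page, which is why no panning would be needed.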