Well, I have over 15 years of experience with scanners and OCR software. Things have improved a bit over the years, but not enough to offer 100% reliable results.
First of all, a lot depends on the input. I have just scanned two books published at roughly the same time, in the early 1990s, both paperbacks. I am currently using a very efficient HP scanner with an ADF (automatic document feeder) and very good brightness/contrast control. One of the books scanned nearly perfectly, meaning fewer than one typo per page; the other was a disaster, with sometimes as many as 20 typos per page. The reason was probably that the publisher (OUP!) made a poor choice of font/paper/ink, which resulted in rather "thick" print: in many places there is absolutely no white space between characters.
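If you hit similarly "thick" print, it can sometimes help to preprocess the scans before feeding them to the OCR engine. Here's a minimal sketch using Pillow; the filename and the threshold value are made up for illustration, and the right threshold varies from book to book:

```python
from PIL import Image, ImageFilter

# Load a grayscale scan of one page (hypothetical filename).
page = Image.open("page.png").convert("L")

# Binarize with a deliberately low threshold: only the darkest pixels
# stay black, which thins heavy strokes so touching characters separate.
# 100 is just a starting guess; tune it per book.
THRESHOLD = 100
binary = page.point(lambda px: 255 if px > THRESHOLD else 0)

# A light median filter knocks out speckle without eroding the glyphs.
cleaned = binary.filter(ImageFilter.MedianFilter(size=3))
cleaned.save("page_cleaned.png")
```

This won't rescue a truly bad original, but in my experience a little thinning and despeckling can noticeably cut the error rate on dense print.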
Secondly, what you get when you export to PDF is not necessarily what you think you get. HP exports to PDF through some crippled Iris OCR software, and the results look perfect at first. Only on a closer look inside do you realize how it's done: the recognized text is hidden behind the image of each page, so what you actually see is an image that is a dot-for-dot copy of the original page, but when you search the PDF, it's the text layer behind the image that gets searched.
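You can verify this structure yourself. Here's a minimal sketch using PyMuPDF (the `fitz` module); `scan.pdf` is a hypothetical filename standing in for one of these image-over-text PDFs:

```python
import fitz  # PyMuPDF: pip install pymupdf

doc = fitz.open("scan.pdf")  # hypothetical filename
page = doc[0]

# The visible content is typically one full-page raster image...
print("images on page:", len(page.get_images(full=True)))

# ...while the searchable text sits in an invisible layer behind it.
print("hidden text layer:")
print(page.get_text())
```

If the page reports exactly one image plus a block of extractable text, you're looking at that image-with-hidden-text construction rather than a true text PDF.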
Full versions of OCR software work much better, but, as I said, the results vary a lot. I am using both ABBYY FineReader and OmniPage, and sometimes one works a bit better, sometimes the other (depending on the input), but generally they're both great products.
Automated PDF generation has one disadvantage: if anything goes wrong (for example, some pages get misfed through the ADF), you have little chance of correcting it, and you end up with faulty output. And editing PDFs is certainly not a task for the faint-hearted.
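That said, page-level repairs are scriptable if you'd rather not wrestle with a PDF editor. Here's a minimal sketch using pypdf that swaps a rescanned page into the faulty output; all filenames and the page number are made up for illustration:

```python
from pypdf import PdfReader, PdfWriter

BAD_PAGE = 11  # 0-based index of the misfed page (hypothetical)

original = PdfReader("book.pdf")   # the faulty automated output
rescan = PdfReader("rescan.pdf")   # a one-page rescan of the bad page

writer = PdfWriter()
for i, page in enumerate(original.pages):
    if i == BAD_PAGE:
        writer.add_page(rescan.pages[0])  # swap in the corrected page
    else:
        writer.add_page(page)             # keep everything else as-is

with open("book_fixed.pdf", "wb") as f:
    writer.write(f)
```

Note that this only fixes the page image and its text layer as rescanned; it won't restore whatever bookmarks or links the original export might have carried.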