Old 09-08-2025, 06:29 AM   #25
un_pogaz
Chalut o/
un_pogaz ought to be getting tired of karma fortunes by now.
 
 
Posts: 486
Karma: 678910
Join Date: Dec 2017
Device: Kobo
So, I see that the feature is now implemented in Calibre. I reiterate my concerns regarding the reliability of LLMs, as well as their long-term and even medium-term sustainability and support, but at least, at least, it is implemented as a completely independent opt-in feature: the user must explicitly enable it themselves, it runs only when they request it, and it is not deeply entangled with the rest of the code.

Besides, "opt-in" and "runs only when requested by the user" really need to be highlighted in the upcoming changelog, because, oh boy, AI/LLM features can be very unpopular, so if you are ambiguous or unclear about this, it could be quite badly received.

Additionally, I recommend adding two warnings:
  • First, that the LLM feature uses third-party services, so users should be careful about the data sent to them. Since this kind of service needs a lot of data that could be private, this warning matters more here than for other features.
  • Second, that the answers provided by the LLM are not fully reliable and should be treated with caution. I don't understand what fool would blindly trust a procedural random text generator, but it seems to be the trend, so this reminder seems important.