Old 07-17-2015, 09:52 PM   #19
gmw
I quite admire Amazon for trying so hard to keep their review system as useful as possible. This is an ongoing battle that can really only be fought on the defensive: set something up, see what goes wrong, tune it, repeat.

But the approach described in the OP link raises a number of issues.


How are they establishing the relationships?

Obviously Amazon will be reluctant to reveal the details, because that would just let the gamers find ways around it that much faster (you can be fairly sure ways will be found eventually anyway). But without knowing how they are doing it, there is no way to assess its effectiveness.
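For the sake of argument, here is a minimal sketch of one way a "relationship" might be inferred from public social-graph data: score how much two users' follow lists overlap. This is purely illustrative, every name and the threshold are assumptions, and Amazon's actual method is undisclosed.

```python
# Hypothetical sketch only: Amazon has not revealed its method.
# Here a "relationship" is guessed from the overlap of two users'
# public follow lists, scored with the Jaccard coefficient.

def jaccard(a, b):
    """Overlap of two sets of social connections, from 0.0 to 1.0."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def likely_related(reviewer_follows, author_follows, threshold=0.3):
    """Flag a reviewer/author pair whose follow lists overlap heavily.

    threshold is a made-up tuning knob: set it too low and unrelated
    users get flagged (false positives); too high and genuine
    relationships slip through (false negatives).
    """
    return jaccard(reviewer_follows, author_follows) >= threshold

# Two users who follow mostly the same small accounts:
print(likely_related({"a", "b", "c", "d"}, {"b", "c", "d", "e"}))  # True (overlap 3/5 = 0.6)
```

Note that this toy score already shows the core weakness discussed below: heavy overlap in who you follow does not prove a personal relationship, and the flag is only as good as wherever the threshold is set.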


How many fake reviews does their system actually detect? How many false positives and false negatives does it produce?

No way to guess. The social networking environment of the Internet is growing ever more incestuous: just because someone "friends" or "follows" someone else does not mean they have a personal relationship, and a lack of following does not imply they don't. We might assume that Amazon would only use an algorithm that gets it right most of the time. But what is "most"? 51%? 95%? And how would they even measure its effectiveness?
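Measuring effectiveness would, at minimum, require a hand-labelled sample of reviews to compare against. A sketch of what that measurement looks like, with made-up numbers: raw accuracy is misleading when fake reviews are rare, which is why the false-positive and false-negative rates matter more than a single percentage.

```python
# Sketch: how "gets it right most of the time" could be quantified,
# given a hand-labelled sample. All figures below are invented.

def effectiveness(predicted_fake, actually_fake, total):
    """Compute precision, recall, and accuracy from labelled review IDs.

    predicted_fake: set of review IDs the algorithm flagged
    actually_fake:  set of review IDs humans confirmed as fake
    total:          total number of reviews in the sample
    """
    tp = len(predicted_fake & actually_fake)   # correctly flagged
    fp = len(predicted_fake - actually_fake)   # genuine reviews removed
    fn = len(actually_fake - predicted_fake)   # fakes that slipped through
    precision = tp / len(predicted_fake) if predicted_fake else 0.0
    recall = tp / len(actually_fake) if actually_fake else 0.0
    accuracy = (total - fp - fn) / total
    return precision, recall, accuracy

# 1000 reviews, 20 genuinely fake; the algorithm flags 25, 15 of them correctly.
pred = {f"r{i}" for i in range(25)}         # r0..r24 flagged
real = {f"r{i}" for i in range(10, 30)}     # r10..r29 actually fake
p, r, a = effectiveness(pred, real, 1000)
print(p, r, a)  # 0.6 precision, 0.75 recall, 0.985 accuracy
```

In this invented example the system is "98.5% accurate" yet still deletes ten legitimate reviews, which is exactly the false-positive problem raised in the next question.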


How are they dealing with the false positives?

The example in the OP link would seem to suggest the answer is: badly. But perhaps they have good reason in this case? Who knows but Amazon?

If a person goes to the effort of writing a review, and some people go into considerable detail, it is understandable that they would get upset to discover their effort has been wasted. If this algorithm goes too badly wrong, Amazon could damage their entire review system by discouraging people from submitting reviews.


And from an entirely disinterested security-analysis perspective, it would be interesting to find out how Amazon are doing this: what metadata is escaping the various social networks (assuming that is their source) that allows this sort of relationship to be calculated? That has security implications much wider than Amazon's review system.