02-23-2023, 01:52 AM | #16 |
cacoethes scribendi
Posts: 5,809
Karma: 137770742
Join Date: Nov 2010
Location: Australia
Device: Kobo Aura One & H2Ov2, Sony PRS-650
|
So current AI might be a bit primitive, but that will change. Eventually there will come the argument that even human authors are building on what has gone before, so why shouldn't an AI? The very words that we choose to express ourselves were first coined by authors of the past, even if that history is now obscure for large parts of our language. We don't automatically assume all current human authors are guilty of plagiarism; we recognise that the situation is more subtle than that. The same will become true for AIs. I don't know how close we are to that.
No, I'm not comfortable with AIs being used for this purpose, but I find it difficult to argue that there is something inherently wrong with it. Perhaps it becomes something like food packaging where we must be told the ingredients ("this novel contains 53.2% AI generated text") ... but I have no idea how you would enforce such a requirement. |
02-23-2023, 05:10 AM | #17 | |
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
Might change.
The AI is just about as primitive now as in the 1960s. What has changed is the size of the data sets, due to scraping them off the Internet. Proper human curation of that much data, such as validating truthiness or copyright status, isn't affordable.

Part of my motivation for learning programming (intermittently from 1969 to 1981) was to be involved in or develop AI. By 1982 I was learning programming anyway, even though I'd learned that real AI was simply fantasy. Naive people, and some tech people who should have known better, claimed AI just needed more powerful computers, or that it would emerge from a sufficiently powerful and complex system. Well, like a program to play chess, a slow computer would just do slow AI, if we knew how to do it.

Real AI research was mostly abandoned years ago, back in the 1980s. Instead, research is on networked multilayer databases (so-called Neural Networks, which is marketing; no biological system is similar), how to put data in (so-called Machine Learning, though no learning is involved) and pattern matching to get an output. Because the input data is so large and poorly curated, there is low confidence in the accuracy of the output or how exactly it was assembled.

The Watson Medical System just used the branding of the Watson that won Jeopardy and was unrelated. It's a failure. The winning of Jeopardy is an example of something actually well suited to computing (like chess). Deciding which image is a chair is nearly impossible for an AI unless it's fed images of all the possible things to sit on at various angles; a two-year-old human can do far better. That's the so-called AI paradox. We don't even understand exactly what intelligence is.

People also confuse language and vocabulary. Many animals naturally have a large vocabulary, and some, such as rooks, parrots, dogs, horses and chimps, can learn more. Some birds and animals (starlings, for instance) can also mimic sounds or human speech without adding them to a vocabulary.
No primate, parrot or rook has demonstrated use of language, only vocabulary. Brain size seems irrelevant, as it's now thought rooks might be better than chimps at problem solving, and whales have big brains.

No doubt chatbots and tools to generate images or text, or match faces, will gradually improve, as they have done. Eliza, maybe the first chatbot, is nearly 60 years old. The only two major changes since are keeping the data separate from the basic rule base and parsing engine, and scraping the data from the Internet. ALICE (Artificial Linguistic Internet Computer Entity), maybe the first chatbot on the Internet, dates from 1995; the main difference from Eliza was using a programming language designed for such a task (others still use it). ALICE uses an XML schema called AIML (Artificial Intelligence Markup Language) for specifying the heuristic conversation rules. The code of both ALICE and Eliza is Open Source, and Emacs even ships with an Eliza-style therapist (M-x doctor).
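To show how little machinery an Eliza-style chatbot needs, here is a minimal sketch of the approach (the rules below are invented for illustration, not taken from the original Eliza script). Note how the rules are plain data, kept separate from the matching engine, much as ALICE keeps its AIML rules separate from its parser:

```python
import re

# Each rule pairs a regular expression with a response template.
# The rules are data, separate from the matching engine below.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(sentence: str) -> str:
    """Return the first matching rule's response, filling in the capture."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

No understanding is involved anywhere: `respond("I am tired of AI hype")` simply echoes the captured text back as "Why do you say you are tired of AI hype?", and anything unmatched falls through to the default.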
|
|
|
02-23-2023, 05:24 AM | #18 |
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
Ironically, Data in ST:TNG can't do stuff that is easy for computers, and can do stuff we have no idea how to do. The AI in Iain Banks's SF (published as Iain M. Banks on the SF titles) is simply magic by another name, and could more realistically be an alien coupled to a machine (a better sort of Dalek).
Often people concentrate on the nano-machines in The Diamond Age, but they are really magic by another name and a MacGuffin. The real point of the book seems to be an exploration of the idea of AI and how it's a marketing lie. The "Young Lady's Illustrated Primer" of the story certainly uses nano-technology, but it's a lie because it actually uses a human mentor/actress. Indeed, a tablet with Alexa, ChatGPT, Cortana or "OK Google" isn't much different. The web assistant bots each have a huge team of humans who review poorly answered or unanswered common questions and add to the system. Amazon makes almost no income from Alexa, so it is reviewing staff and may cut up to 10,000 in that business division. It's not a joke that the financial model of ChatGPT is selling tools to detect it. |
02-23-2023, 05:43 AM | #19 |
cacoethes scribendi
Posts: 5,809
Karma: 137770742
Join Date: Nov 2010
Location: Australia
Device: Kobo Aura One & H2Ov2, Sony PRS-650
|
I understand that "AI" remains something of a misnomer at this point, but it's a lot easier to say that than to provide a full description of a very wide field. "AI" has always encompassed a range of different things, depending on who you asked. But don't get too dismissive of many current systems being pattern-matching machines; that pretty much describes a large part of the human brain. Such systems are already demonstrating many of the same problems that humans face (e.g. biased input leads to biased output, and output can be deliberately manipulated by manipulating the input), which leaves one wondering just what advantages an artificial intelligence may have to offer.
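The biased-input point is easy to demonstrate with a toy: a "model" that is nothing but frequency counts of its training data will confidently echo whatever skew its input had. The words and counts below are invented purely for illustration:

```python
from collections import Counter

def train(corpus_words):
    """A toy 'model': nothing but frequency counts of the training words."""
    return Counter(corpus_words)

def predict(model):
    """The model's 'opinion' is simply its most frequent training word."""
    return model.most_common(1)[0][0]

# Biased input leads to biased output...
model = train(["good"] * 3 + ["bad"] * 1)
# ...and the output can be steered by manipulating the input.
poisoned = train(["good"] * 3 + ["bad"] * 5)
```

Here `predict(model)` returns "good" while `predict(poisoned)` returns "bad": the same code reaches opposite conclusions purely from the mix of its input.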
|
02-23-2023, 06:13 AM | #20 | |
Avid reader
Posts: 826
Karma: 6377682
Join Date: Apr 2009
Location: UK
Device: Samsung Galaxy Z Flip 4 / Kindle Paperwhite
|
Quote:
I don't understand the objection that AI can never be 'proper' intelligence because it doesn't have squishy bits, neurotransmitter chemicals, etc. Planes fly, but they don't flap their wings. Of course we can't define intelligence as easily as we can define flying, but if I see a system that I can interact with as well as I could with a human, I'm going to call it intelligent. Yes, it's all done with pattern matching, but how do you think human brains work? They also have to be trained by countless hours of interaction with multiple humans to behave sensibly. Then there's the 'Chinese room' argument, which I just find plain silly. Individual neurons in our brains aren't intelligent, but we call the overall system a mind. I don't see why a computer shuffling bits should be any different. Andrew |
|
|
02-23-2023, 06:36 AM | #21 | |
cacoethes scribendi
Posts: 5,809
Karma: 137770742
Join Date: Nov 2010
Location: Australia
Device: Kobo Aura One & H2Ov2, Sony PRS-650
|
Quote:
|
|
02-23-2023, 09:25 AM | #22 | ||
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
Quote:
Quote:
But nothing in computer neural networks has anything to do with how biological neurons work. It's a marketing term. Computers don't work in the same way as any brain; though we don't know exactly how brains work, we do know exactly how computers work.

Bits are just how numbers are stored in a computer. Everything is reduced to bit states (set membership, flags, boolean values), integers, or a clever approximation of floating point numbers using one integer for the exponent and another for the most significant digits. Images are 1-dimensional (fax or other line-at-a-time scans), 2- or n-dimensional arrays of point brightness and hue (RGB, or YUV, and optionally transparency). All programs could in theory be implemented (very slowly) with paper tapes or cards and machinery using any kind of mechanical power source, though the cost and size would be prohibitive.

The first modern computer that worked in a similar way to current silicon chip based computers was made in about 1939 by Konrad Zuse, using mechanical relays*. Fortunately the German Government and Military weren't interested. He went on to found a computer company after WWII.

The Turing Test isn't important to AI, and it can be beaten for a naive human. Chatbots will be improved and will likely pass it consistently; some humans might fail it. It was simply an idea and never a real test of AI, as no-one then or since could adequately define intelligence. OTOH, the Turing Machine was a serious piece of mathematics. All useful general purpose programmable computers are a more limited form of Turing machine:

"Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesizes this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.)" "The difference lies only with the ability of a Turing machine to manipulate an unbounded amount of data. However, given a finite amount of time, a Turing machine (like a real machine) can only manipulate a finite amount of data."

In one sense a true Turing machine can't be built, because the Turing machine has infinite storage space. However, as much storage as needed for a given real problem or program can be added. All the world's Internet data on hard drives is estimated to be only the size of an oil tanker if gathered in one place. Also, there is no great range of AI systems.

* A relay can easily implement Not, Or and And, the building blocks of all digital computers, though Nand (And with inverted output) and Nor (Or with inverted output) are the usual building blocks. Absolutely nothing more is needed. A memory cell can be made from interconnected gates as a flip-flop, though now a stored charge is often used, a solid-state version of magnetic core stores.

Last edited by Quoth; 02-23-2023 at 09:31 AM. |
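To make the footnote concrete, here is a minimal sketch (in Python rather than relay hardware, purely as illustration) of building Not, And and Or out of nothing but Nand:

```python
def nand(a: bool, b: bool) -> bool:
    """The single primitive: true unless both inputs are true."""
    return not (a and b)

# Everything else is composed from nand alone.
def not_(a: bool) -> bool:
    # A nand gate with both inputs tied together inverts.
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # And is just nand with its output inverted back.
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # De Morgan: a or b == not(not a and not b)
    return nand(not_(a), not_(b))
```

Checking all four input combinations against Python's own `and`/`or`/`not` confirms the truth tables match, which is the footnote's point: absolutely nothing more than Nand is needed.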
||
02-23-2023, 09:44 AM | #23 | |
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
Quote:
But I largely agree with most of this. Except that even if we take memories, it's obvious to experts that human memory is nothing like either video recording or computer memory. There are many roadblocks to the idea (supported by Musk) of uploading memory, never mind whatever it is that makes us be, to a computer.

SF isn't a blueprint. Most of it isn't even predictions or warnings; it's entertainment, a different flavour of fantasy, and there is a continuous spectrum of story-telling between High Fantasy and Hard SF. Also, in one sense Hard SF simply ends up being an expert extrapolation of what exists now to a near future. A lot of what people think is hard SF simply has more convincing technobabble, and isn't thought "hard" by any qualified person in the same field. |
|
02-23-2023, 09:50 AM | #24 | |
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
Quote:
|
|
02-23-2023, 10:03 AM | #25 | |
Avid reader
Posts: 826
Karma: 6377682
Join Date: Apr 2009
Location: UK
Device: Samsung Galaxy Z Flip 4 / Kindle Paperwhite
|
Quote:
Andrew Code:
Firstly, the claim that "nothing in computer neural networks has anything to do with how biological neurons work" is not entirely accurate. While it is true that the current implementations of artificial neural networks may differ from biological neural networks, there are efforts being made to incorporate more biological realism into artificial neural networks. For example, spiking neural networks attempt to model the firing behavior of biological neurons more closely. Furthermore, many current deep learning models are based on the structure and function of the visual cortex in the brain. So while there may be differences between artificial and biological neural networks, there is still some basis for the comparison.

The assertion that "we do know exactly how computers work" is also not entirely true. While we have a solid understanding of the underlying hardware and software mechanisms of computers, there are still many areas of computer science that are not fully understood, such as the theoretical limits of computation and the development of algorithms for certain types of problems.

The statement that "all programs could be implemented in theory with paper tapes or cards and machinery using any kind of mechanical power source" is also misleading. While it is true that any program can be represented as a sequence of instructions that can be executed by a machine, the efficiency and practicality of such implementations may vary widely depending on the complexity of the program and the capabilities of the machine.

Regarding the Turing Test, while it may not be a perfect measure of intelligence, it remains an important milestone in the field of artificial intelligence. The ability of a machine to convincingly mimic human conversation is a significant achievement, and chatbots that can pass the Turing Test are still relatively rare. Furthermore, the Turing Test has spurred a great deal of research in natural language processing and machine learning, which has led to many important advances in these fields.

Finally, the claim that there is no great range of AI systems is simply untrue. There is a wide variety of AI systems currently in use, ranging from simple decision trees to complex deep learning models. These systems are used in a wide range of applications, from speech recognition to autonomous driving, and are constantly evolving and improving. To dismiss the range of AI systems as insignificant is to ignore the vast amount of research and development that has gone into this field over the past several decades. |
|
02-23-2023, 12:07 PM | #26 | |
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
Quote:
Sources? ChatGPT is not an authoritative source. It's gobbledegook. |
|
02-23-2023, 02:06 PM | #27 | |
Avid reader
Posts: 826
Karma: 6377682
Join Date: Apr 2009
Location: UK
Device: Samsung Galaxy Z Flip 4 / Kindle Paperwhite
|
Quote:
https://en.wikipedia.org/wiki/Spiking_neural_network https://msail.github.io/post/cnn_human_visual/ I don't know why you're so sure that artificial neural networks have nothing to do with natural ones. As far as I know that's explicitly where they got the idea from. Andrew |
|
02-23-2023, 03:09 PM | #28 |
the rook, bossing Never.
Posts: 11,166
Karma: 85874891
Join Date: Jun 2017
Location: Ireland
Device: All 4 Kinds: epub eink, Kindle, android eink, NxtPaper11
|
No, it's nonsense.
The name is inspired by the biology; they are nothing alike in operation. It's AI industry propaganda. We don't really know exactly how vision works; it's nothing like a camera feeding a computer. A computer neural network is a kind of database structure. It doesn't work at all like biological systems, which are still poorly understood. It's just "AI industry" jargon.

Go do a real course in programming rather than reading industry-sponsored articles. The Register https://www.theregister.com/ is more reliable than many other sources, and it's only a tech news site.

Last edited by Quoth; 02-23-2023 at 03:12 PM. |
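Whichever side of this argument you take, the artificial version is simple enough to show in full: a computer "neuron" is nothing but a weighted sum of numbers pushed through a squashing function, and a "layer" just repeats that arithmetic. A minimal sketch, with made-up weights and nothing beyond the standard library:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed by a sigmoid.
    Plain arithmetic on numbers; nothing biological happens here."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A 'layer' is the same arithmetic repeated once per output."""
    return [neuron(inputs, row, b) for row, b in zip(weight_rows, biases)]
```

With all weights and bias zero the weighted sum is 0 and the sigmoid returns exactly 0.5, so the "network" is fully predictable arithmetic, whatever one thinks of the biological analogy.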
02-23-2023, 06:44 PM | #29 | |
Wizard
Posts: 1,496
Karma: 11250344
Join Date: Aug 2010
Location: NE Oregon
Device: Kobo Sage, Forma, Kindle Oasis 2, Sony PRS-T2
|
Quote:
I would not AT ALL wish for a novel written by Eliza! |
|
02-23-2023, 06:53 PM | #30 |
Grand Sorcerer
Posts: 7,038
Karma: 39379388
Join Date: Jun 2008
Location: near Philadelphia USA
Device: Kindle Kids Edition, Fire HD 10 (11th generation)
|
Reading the ChatGPT passage in #25 slowly, several times, I see that.
But on first reading, not so much. I'd like to think that if ChatGPT wrote a mystery novel in the style of one of my favorite authors, I'd soon realize the plot and characterizations were only superficially plausible -- but I'm not sure. Human authors' plot errors get by me, so why not machine mistakes? And, getting back to #25, if I were an English composition teacher, I think it would at least get a B. |
|