5

According to Thomas S. Kuhn in his classic work, The Structure of Scientific Revolutions:

...'normal science' presupposes a conceptual and instrumental framework or paradigm accepted by an entire scientific community ... [T]he resulting mode of scientific practice inevitably invokes 'crises' which cannot be resolved within this framework...

...the analytical thought experimentation that bulks so large in the writings of Galileo, Einstein, Bohr and others is perfectly calculated to expose the old paradigm to existing knowledge in ways that isolate the root of crisis with a clarity unobtainable in the laboratory.

For the 70 years since its inception, AI has made no significant progress towards its original goal of human-like general intelligence in a machine (an electronic digital computer). Forty years ago John Searle first published his Chinese room thought experiment, which (along with associated arguments) concludes that, for a fundamental reason, the computer is incapable of human-like intelligence. The argument still stands: there have been plenty of attempted rebuttals, but none is widely accepted as successful.

Does the Chinese room thought experiment "expose the old paradigm [computationalism] to existing knowledge in ways that isolate the root of crisis with a clarity unobtainable in the laboratory"? Is AI in a Crisis of Science? Will it only make progress towards AGI when it adopts a different and better paradigm for understanding the device it calls the computer?

Roddus
  • "For the 70 years since inception, AI has made no significant progress towards its original goal of human-like general intelligence in a machine" False. In the past 10 years there have been huge strides forward in machine learning. AlphaStar allows computers to play a complex strategy game that for a long time had been the holy grail of reinforcement learning. GPT-3 allows computers to generate realistic text based on a writing prompt. – causative Apr 29 '21 at 07:13
  • But there are the serious problems of edge cases, adversarial attack, noisy datasets and catastrophic forgetting. And the gazillion iterations of backpropagation. I know ML has seen great progress in limited domains and achieved the much-awaited commercial success. But this is still in limited domains. The issues of edge cases and adversarial attack seem fundamental (a small sticker added to a STOP sign prevents "recognition" as a stop sign, etc.). – Roddus Apr 29 '21 at 07:22
  • There remain flaws, but it's still very significant progress. – causative Apr 29 '21 at 07:28
  • Systems and virtual minds replies to the Chinese Room are pretty broadly accepted among AI researchers. In that field, at least, the CR has not been taken seriously for a while now. It is similar to (and correlates with) attitudes towards the hard problem of consciousness: those of the more scientific persuasion do not see it as cogent. It is not that the argument still stands, but rather that it became clear that it [depends on certain articles of faith](https://iep.utm.edu/chineser/#H5) that are perennially unresolvable, and is orthogonal to progress or lack thereof in the AI field. – Conifold Apr 29 '21 at 07:57
  • Maybe AI is not a "full-fledged" science based on a unifying theory (like e.g. Newtonian mechanics and Relativity), but it is still a [Research Programme](https://plato.stanford.edu/entries/lakatos/#FalsMethScieReseProg1970) – Mauro ALLEGRANZA Apr 29 '21 at 09:28
  • "The computer for a fundamental reason is incapable of human-like intelligence" is not something that Searle believes or was trying to argue, unless by "the computer" you only mean the particular computer in the Chinese room argument. He was only arguing that you can't take a purely functionalist approach to deciding whether a machine understands what it's doing. Even if correct, the argument doesn't imply that we can never make a machine that genuinely understands Chinese. – benrg Apr 29 '21 at 17:15
  • "Systems and virtual minds replies to the Chinese Room are pretty broadly accepted among AI researchers." But not among philosophers. I don't believe I've ever encountered anyone who is not a committed materialist who finds those replies convincing. – David Gudeman Apr 29 '21 at 18:18
  • @DavidGudeman - David Chalmers is not a materialist, he coined the term "hard problem of consciousness" to argue the metaphysical inexplicability of qualia in terms of purely physical facts, but he does think all the *behaviors* associated with consciousness have physical explanations. He further postulates "psychophysical laws" linking physical process to qualia, and he [argues](http://consc.net/papers/qualia.html) these laws would likely have the property that computationally identical processes would give rise to the same qualia, which would include the computations in the Chinese Room. – Hypnosifl Apr 29 '21 at 19:50
  • (cont.) Specifically, in his book *The Conscious Mind*, he discusses the Chinese Room argument starting on p. 323, imagining a "demon" in the room doing the calculations, and says on p. 325 "Once we look past the images brought to mind by the presence of the irrelevant demon and by the slow speed of symbol shuffling, however, we see that the causal dynamics in the room precisely reflect the causal dynamics in the skull. This way, it no longer seems so implausible to suppose that the system gives rise to experience." – Hypnosifl Apr 29 '21 at 19:53
  • @Hypnosifl: Good point. I should have said "materialists or panpsychists". – David Gudeman Apr 29 '21 at 20:35
  • @DavidGudeman - Someone could accept the idea of psychophysical laws, and the idea that they are computation-dependent rather than depending on specific types of matter (which I think is what Searle believes), but still think only certain special computations give rise to conscious experiences. I suspect though that most people who reject outright the possibility of computation-dependent psychophysical laws are interactive dualists of some kind, i.e. people who don't actually think the laws of physics can fully explain human behavior. – Hypnosifl Apr 29 '21 at 21:13
  • You might be interested in: a) [What computers can't do](https://archive.org/details/whatcomputerscan017504mbp) and b) [What computers still can't do](https://www.semanticscholar.org/paper/What-Computers-Still-Can%27t-Do-McCarthy-Dreyfus/943f41c125e62bbdf9d15fa0d6ff8d406c640d77) – Nikos M. Apr 08 '23 at 14:32
  • Go to Montana, look for Mr. Smith. If and when you find him (give yourself 10 years for the search), ask him where he saw the girl who .. just ask him about *the* girl. – Agent Smith Apr 08 '23 at 17:54
  • I think someone proved that bumblebees cannot fly. – Scott Rowe Apr 09 '23 at 01:37

5 Answers

4

Your question appears to be ill-informed.

Neural networks. Image processing through layered convolutional neural networks. Natural language processing by Watson (able to beat humans at Jeopardy). Deep Blue, AlphaGo, and AlphaZero, able to beat humans at some of our most complex games. Tegmark and Wu's AI Physicist.

These are all substantial steps, proven in practice.

What we have discovered is that what our brains do is a lot more complex than we thought. Image processing in particular turned out to be a lot harder than initially expected.

It's important to distinguish between Artificial Intelligence, which is already ubiquitous, and Artificial General Intelligence or synthetic sentience, which we just don't know when will be possible - it has seemed 'a few decades away' for probably at least a century.

Hofstadter's strange-loops model alone can potentially account for minds being different from Chinese rooms. Discussed here: What is intelligence?

Personally, I am with Penrose-Hameroff and Orch OR. That interpretation does not necessarily require quantum effects, but it does involve emergent dynamics ('orchestration').

On the paradigms, from my post in that linked discussion:

There is a powerful tendency for people in science and computing to think there is nothing very interesting or special about human minds. And unfortunately, there is a powerful strand in philosophy (and theology) which says there is something so special about them that scientists aren't on track to figuring them out - the 'qualia' idea and the so-called Hard Problem of Consciousness. I strongly recommend not joining either camp. The story of physics has gone from thinking, in 1900, that we were a few results away from explaining everything, to now not knowing what 95% of the universe is made of - our greatest progress has been to begin understanding the scope of our ignorance. I feel strongly we are on a similar trajectory with intelligence.

Your question is like saying, 'There hasn't been much progress in physics lately, so probably we won't be able to explain everything'. I.e., both wrong, and misguided, in a way that people familiar with the subject will have very little patience for.

CriglCragl (edited by Guy Inchbald)
  • "Comically ill informed" is a violation of the rules of this forum. An insult in high-brow language is still an insult. – David Gudeman Apr 29 '21 at 18:10
  • I have edited it down. – Guy Inchbald Apr 29 '21 at 18:53
  • CriglCragl mentions Watson winning at Jeopardy as evidence of general intelligence in current computers. I don't suppose it would help to note that Watson didn't "know" that the wife of a US president was female. – Roddus May 16 '21 at 23:16
  • @Roddus: Arguing natural language processing isn't progress, is like arguing image processing isn't progress. – CriglCragl May 17 '21 at 12:35
  • @CriglCragl “Arguing natural language processing isn't progress, is like arguing image processing isn't progress”. If NLP is statistics on a mega data set then I'd say it's not progress. Work was done on this about 30 years ago by Eugene Charniak. The questions asked of the machine tested generality, or abstraction. I think his conclusions still stand. – Roddus May 19 '21 at 00:46
  • @CriglCragl for image processing, I'd say current image processing isn't really much progress. Human vision seems vastly different from anything a computer now does. Just the saccades, for instance. I had a look at these and they seemed crucial to the organic process. Of course that doesn't mean they're necessary. But it looked like they might be. – Roddus May 19 '21 at 00:47
  • This structure seems to be analogous to how actual biological systems extract and process image components (https://en.wikipedia.org/wiki/Convolutional_neural_network). It's a whole area to dive into. Saying you read someone from 30 years ago so you don't need to pay attention to developments sounds pretty weak. Saccades are fine, but it's like how our balance system works, as I understand it - we keep twitching around the balance point so we have data to react to, rather than aiming for stasis. The same goes for how hearing aids enhance hearing. – CriglCragl May 19 '21 at 00:59
  • Fusion is always perpetually 30 years away, but the sun gets its light here in 8 minutes. The "crisis" proffered is not so much a crisis, but the slow awakening that PSSH and LoT are wrong. From the invention of Daedalus and Icarus, to da Vinci's sketches, to the Wright brothers was thousands of years. From the difference engine to ENIAC was closer to 100. From A&N's GPS to IBM Watson was 30 years, and 30 more years gives us Boston Dynamics and autonomous cars. The first book I'm aware of that is explicitly the philosophy of computer science only stretches back 20 years... – J D Apr 08 '23 at 20:49
  • and solid books on physical computation and embodied cognition go back only 10 years. Sam Altman in an interview with Lex Fridman talked about the founding of OpenAI being derided 7 years ago, and ChatGPT 4 is the best LLM to date. Against the background of 3.7 billion years of evolution, I'd say to claim 75 years is a long time presupposes a certain insularity in thinking. – J D Apr 08 '23 at 20:55
  • "and Artificial General Intelligence or synthetic sentience, which we just don't know when will be possible" -- In other words *you just agreed* with the point you called ill-informed. There has been ZERO progress toward AGI in the last 70 years of AI hype and hope. You actually agreed with this! – user4894 Apr 09 '23 at 18:19
  • @user4894: We don't know when we will have fusion power. Does that mean there has been no progress towards it? Your point doesn't make sense. – CriglCragl Apr 10 '23 at 05:45
  • @CriglCragl There has been progress toward fusion power. There has been no progress toward AGI. If AGI is ever achieved, it will not be through statistical language models or neural nets that amount to clever data mining. – user4894 Apr 27 '23 at 00:44
  • @user4894: I've made my argument. If you disagree with my points, respond *in your own answer*. I have taken the trouble to explain why I think the points in your comment are obviously wrong. So just saying 'yeah but no' with no substance to back your points is discourteous. ChatGPT alone shows substantial progress. – CriglCragl Apr 27 '23 at 01:02
  • @CriglCragl You *asked me a question* ("Does that mean there has been no progress ...") and I answered it, and then you complained that I had the temerity to address you. – user4894 Apr 27 '23 at 01:06
  • @user4894: 'The real problem - It looks like scientists and philosophers might have made consciousness far more mysterious than it needs to be' https://aeon.co/essays/the-hard-problem-of-consciousness-is-a-distraction-from-the-real-one – CriglCragl Apr 27 '23 at 11:41
2

Regarding CriglCragl's answer and the common view it expresses: I think CC is actually demonstrating that AI is indeed in a crisis of science. CC says the idea that AI is in a crisis of science is:

both wrong, and misguided, in a way that people familiar with the subject will have very little patience for.

(I presume CC's original opening line, "Your question appears to be comically ill informed", indicates that they fully believe they are indeed among those familiar with the subject.)

But as everyone knows, a crisis of science comes about precisely because a research programme has failed to make fundamental progress, and a thought experiment exposes that failure in ways that isolate the root of crisis with a clarity unobtainable in the laboratory.

CC agrees there is a fundamental failure:

Artificial General Intelligence [-] we just don't know when [it] will be possible

If CC understands the CRA then they know it exposes this failure by appeal to the fundamental nature of computation. (No one seems to be denying computation is the purely syntactic manipulation of symbols without accessing their meanings.)

And the third ingredient of any crisis of science is that the Illuminati can't conceive of themselves as being wrong and, because of their psychological dependence on unbridled falsehood, react emotionally to any suggestion of error.

So I think CC's comment which expresses a common view within AI is quite a clear answer to my question: YES, AI is in a crisis of science.

About the CRA, this is the key thing to me, and it addresses some of the other comments above. The CRA can be boiled down to just one issue: the intrinsic meaninglessness of the symbol. If you say computers manipulate symbols and do nothing else (as Searle does), and given that symbols in themselves say nothing about what they mean, then the computer is forever a prisoner in a universe of meaningless syntax and formality. The systems reply, the many mansions reply, the robot reply, all the replies are beside the point. Unless the inherent meaninglessness of the symbol can be overcome, computers will never be intelligent. This is Searle's core position, and I think it clearly marks a crisis.
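
To make "purely syntactic manipulation" concrete, here is a minimal, illustrative sketch in Python (the rule table is invented for the example; it is not Searle's): the program produces plausible Chinese output by lookup alone, and nothing in it ever touches what any symbol means.

```python
# Toy "Chinese room": output symbols are selected from input symbols
# purely by rule lookup. The table below is invented for illustration;
# the program has no access to the meanings of these strings.
RULE_BOOK = {
    "你好吗": "我很好",        # if this symbol string comes in, send that one out
    "你叫什么名字": "我叫王",   # another purely formal rule
}

def room(symbols: str) -> str:
    """Apply the rule book; fall back to a fixed symbol string otherwise."""
    return RULE_BOOK.get(symbols, "请再说一遍")

if __name__ == "__main__":
    print(room("你好吗"))  # prints 我很好, yet the program 'understands' nothing
```

Whether such lookup, or any more elaborate program built from the same purely formal moves, could ever amount to understanding is exactly what the CRA denies.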

Just for the sake of completeness, I've been given a -1 score. Because of this sort of unhelpful and, I have to say, somewhat arrogant behaviour, I gave up on PSE a while ago and switched to academia.edu. A few days ago I posted exactly the same question there in the form of a short paper. The second respondent was Pat Hayes, AI royalty, who answered with well-thought-out and constructive arguments, as of course one would expect. In all the comments there was no beating of the hairless chest. I'm sure PSE gives good guidance to novices, but I think it's simply wrong to expect balanced debate (Crigl please take note).

According to the voting balloon, a negative vote means "The question does not show any research effort". So (to cut and paste some recent relevant research effort):

...despite seven decades of prodigious funding and effort, nothing approaching AGI has been demonstrated. Instead, serious practical and theoretical difficulties have arisen, including those known as the problem of design (1), the problem of machine translation (2), the frame problem (3), the problem of common-sense knowledge (4), the problem of combinatorial explosion (5), the Chinese room argument (6), the infinity of facts (7), the symbol grounding problem (8), and the problem of encodingism (9). And for "deep learning": edge cases (10), noisy data-sets (11), adversarial attack (12) and catastrophic forgetting (13).

  1. Ada Lovelace, 1843, "Note G", quoted in "The Turing Test," Stanford Encyclopedia of Philosophy, section 2.6, https://plato.stanford.edu/entries/turing-test. Also see John von Neumann, "First Draft of a Report on the EDVAC," (Moore School of Electrical Engineering, University of Pennsylvania, 30 June 1945), 1
  2. Yehoshua Bar-Hillel, "The Present Status of Automatic Translation of Languages," in Advances in Computers, ed. Franz L. Alt (Academic Press, 1960), 1: 91-163.
  3. J. McCarthy and P. J. Hayes, "Some Philosophical Problems from the Standpoint of Artificial Intelligence," in Bernard Meltzer and Donald Michie (eds.), Machine Intelligence 4 (American Elsevier, 1969), 463-502. Also see Dreyfus, "Alchemy and Artificial Intelligence," 29, 39, 68.
  4. Hubert L. Dreyfus, "Alchemy and Artificial Intelligence," (The RAND Corporation, P-3244, December 1965), 39.
  5. James Lighthill, "Artificial Intelligence: A General Survey," section 3 Conclusion, in Artificial Intelligence: A Paper Symposium (Science Research Council of Great Britain, July 1972). Also Dreyfus, "Alchemy and Artificial Intelligence," 39.
  6. John R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences 3, no. 3 (1980): 417-457.
  7. Dreyfus, "Alchemy and Artificial Intelligence," 39. Also Daniel C. Dennett, "Cognitive Wheels: The Frame Problem of AI," in The Robot's Dilemma: The Frame Problem in Artificial Intelligence, ed. Zenon W. Pylyshyn (1984; Ablex, 1987), 49.
  8. Stevan Harnad, “The Symbol Grounding Problem,” Physica D 42 (June 1990): 335-346.
  9. Mark Bickhard and Lauren Terveen, Foundational Issues in Artificial Intelligence: Impasse and Solution (Elsevier, 1995).
  10. Overview of issues: Philip Koopman, "Edge Cases and Autonomous Vehicle Safety," (paper presented at the Safety-Critical Systems Symposium, Bristol UK, 7 February 2019).
  11. Overview of literature: Gupta, Shivani and Gupta, Atul, "Dealing with Noise Problem in Machine Learning Data-sets: A Systematic Review," Procedia Computer Science 161 (Elsevier, 2019): 466-474.
  12. Survey: Anirban Chakraborty et al., "Adversarial Attacks and Defences: A Survey," (arXiv:1810.00069v1, 28 September 2018).
  13. James Kirkpatrick et al., "Overcoming Catastrophic Forgetting in Neural Networks," Proceedings of the National Academy of Sciences 114, no. 13 (2017): 3521-3526.
Roddus
  • We don't know when we'll understand dark matter. That doesn't mean it can't be explained. Are we in a 'dark matter crisis'? No. It's just an open question. Disproving the luminiferous ether, or the ultraviolet catastrophe - those were crises, and not due to thought experiments. Penrose developed Orch OR exactly to address issues you mention. You get minuses for asserting daft things like there's an emotional denial of the issue by 'illuminati'. It seems you have philosopher's hubris. Wider discussion here: https://philosophy.stackexchange.com/questions/33555/free-will-and-intelligence/52457#52457 – CriglCragl May 11 '21 at 22:07
  • @CriglCragl Well, we don't understand intelligence either, but that doesn't mean it can't be explained. But what it does mean is that computation is not an adequate explanation. If it was, then instead of 70 years of abject failure to demonstrate human-like general intelligence, there would have been more or less steady progress. David Deutsch reiterates this in his 3 October 2012 Guardian article. Asserting there is an emotional denial by the luminaries isn't a daft thing, and has been made by many. How else do you explain 70 years of abject failure, yet continued embrace of computationalism? – Roddus May 15 '21 at 01:24
  • I find your choice of words childish and irritating, it is far from Socratic dialogue, it's just chippy egotism. What I'm really interested in at the moment is von Neumann's generalisation of Turing machines to universal constructors, with implications for the topological transformations they are capable of. I lost faith in Deutsch a long time ago. But Chiara is a boss https://www.quantamagazine.org/with-constructor-theory-chiara-marletto-invokes-the-impossible-20210429/ – CriglCragl May 15 '21 at 01:30
  • @CriglCragl Putting aside ad hominem issues, what about the key question: if computationalism is right, why is there still, after 70 years, no idiomatic conversation, ability to generalize, or common-sense knowledge? This is really a fundamental issue. The second key question being, if symbols in themselves are meaningless, how could a purely syntactic device such as a computer understand what it manipulates? – Roddus May 16 '21 at 22:56
  • Because those things aren't fundamental, they are high level and emergent. Your stance is like saying, making insects from scratch wouldn't be progress on how to make humans. There isn't a discontinuity between us and insects, only gradual improvements. Meaning and concepts emerged from biological soup, and will do from electron soup. – CriglCragl May 17 '21 at 12:40
  • You might be interested in: a) [What computers can't do](https://archive.org/details/whatcomputerscan017504mbp) and b) [What computers still can't do](https://www.semanticscholar.org/paper/What-Computers-Still-Can%27t-Do-McCarthy-Dreyfus/943f41c125e62bbdf9d15fa0d6ff8d406c640d77) – Nikos M. Apr 08 '23 at 14:32
  • @Nikos M. I agree that both those Dreyfus books (and the Dreyfus and Dreyfus book) are important. You might know that H. Dreyfus was at MIT with Minsky etc. and after his "Alchemy and Artificial Intelligence" he was punished with the greatest sanction possible: no longer being invited to lunch (petty, but I think it revealed insecurity and failure). I think Heidegger is basically right about several things, but also I think that his conception of being "in" the world or "of" the world can be realized in a computer as a theory of mind, but not computationally. – Roddus Apr 10 '23 at 23:42
2

If you don't mean it in some weaker sense of challenging basic terms in AI research ('consciousness'), then the onus is on you to show that there is Kuhn's "incommensurability" between the old and the new AI:

Newton’s theory was initially widely rejected because it did not explain the attractive forces between matter, something required of any mechanics from the perspective of the proponents of Aristotle and Descartes’ theories (Kuhn 1962, 148). According to Kuhn, with the acceptance of Newton’s theory, this question was banished from science as illegitimate, only to re-emerge with the solution offered by general relativity. He concluded that scientific revolutions alter the very definition of science itself.

https://plato.stanford.edu/entries/incommensurability/

The upshot is that the pre- and post-revolution scientists are using different languages with the same terms, so that

revolutions change what counts as the facts in the first place.

In effect I know nothing about AI, but I am skeptical. Given we've more or less debunked the Turing Test, you might disagree. But is Deep Blue any more or less sentient than it was?

  • I agree Newtonian physics altered science itself in that the causation-by-contact doctrine of the traditional mechanical model was dismissed by Newtonian forces. But postulating, then empirically supporting, the existence of forces (whatever they are) seems within the same "definition" of science. It was the likes of Popper who altered the definition of science. At present, psychology and AI (Russell & Norvig etc.) are using the same terms with very different meanings (e.g. "reasoning", "representation", "perception"), but without pre- and post-revolution. AI likely needs new terms with new meanings. – Roddus Apr 11 '23 at 00:03
  • Maybe @Roddus . As I said, I know in effect nothing about AI. I've read Kuhn, hence my answer. And I hope it was helpful. Cheers –  Apr 11 '23 at 00:06
1

You are right, there is a crisis in science, which reflects the crisis of human understanding. Consider this argument by Daniel Dennett:

[Image of the quoted Dennett argument; not reproduced here.]

It proposes that we humans too can rely on the Chinese room in our brains to live our lives without much understanding.

This is possible because our psyche consists of two minds. Daniel Kahneman of "Thinking, Fast and Slow" referred to the two as System 1 and System 2. Mark Manson of "Everything Is F*cked: A Book About Hope" referred to them as the Feeling Brain and the Thinking Brain respectively.1

System 1 mostly lives in the subconscious. It is responsible for learning John Locke's "simple ideas" and for intuition, and it communicates to us through feelings. Its design is that of a machine-learning AI (of the Chinese room).

System 2 can understand by discovering interactive models of reality ("complex ideas"). That's what real science is about -- Newton discovering universal gravity in his imagination, Copernicus and Galileo discovering modern cosmology, Stephen Hawking discovering how black holes evaporate -- even though to this day no one has seen one.2

Machine learning AI, therefore, simulates human intuition. And the irony is that we, in our current state, are not that different. We still don't know how to teach everyone, consistently and reliably, the art of understanding. So, in a typical person, System 2 is not working all that well, forcing them to rely, instead, on the intuitions of System 1. This -- the crisis of understanding -- has been repeatedly identified3 as the problem at the root of all evils.

 

1 Many older sources describe the same dichotomy. In Sigmund Freud's model, System 1 is id/superego and System 2 is ego. This, in turn, matches Socrates' Chariot Allegory -- id being the dark horse, superego the white one, and ego the charioteer, such as it is. Buddha used the parable of a person (System 2) riding an elephant (System 1) to describe the same idea.

2 That's why Hawking was never awarded a Nobel Prize. And neither was Einstein, at least not for his discovery of General Relativity. Newton wouldn't have been awarded one either. That reflects our misunderstanding of what real science is about.

3 Take Socrates alluding to knowledge being the only true virtue, for example. Or Spinoza.

Yuri Zavorotny
  • Interesting discussion. However, we ran into limits to understanding when trying to build computers and algorithms based on our algorithmic virtual System 2 reasoning. The neural net analog programming of our most recent AI is an effort to replicate the non-algorithmic intuitions of our unconscious System 1. So we now HAVE both systems in our AI, but our computers still don't have understanding. Nor do our computers have consciousness. Getting both computational approaches still doesn't yield understanding or consciousness. – Dcleve Apr 08 '23 at 18:11
  • That's true, classic computers didn't have any understanding of their own: their algorithms encoded the understanding of their programmers. And while it should be possible to design an AGI that would discover its own understanding, to me that's a moot point. Why focus on AI when we have our own potential largely untapped? As it stands, relatively few individuals manage to attain a more comprehensive understanding of the world (e.g. Socrates, Jesus, or Nietzsche), and most of them are traumatized by their aloneness - by living in a world that neither understands nor seems to care. – Yuri Zavorotny Apr 08 '23 at 18:54
  • +1 "Machine learning AI, therefore, simulates human intuition." Connectionist models are a class of their own removed from symbolic systems. – J D Apr 08 '23 at 20:38
  • Yes, connectionist models (deep neural networks) are a separate part of human cognition. That's why, "Much learning [System 1] does not teach understanding [System 2]; otherwise, it would have taught Hesiod and Pythagoras, and again Xenophanes and Hecataeus." (Heraclitus, 450 BC) – Yuri Zavorotny Apr 08 '23 at 23:45
  • … which is to say that the art of understanding (of System 2 operation) should be taught explicitly. Otherwise it's touch and go. – Yuri Zavorotny Apr 09 '23 at 00:40
  • Why focus on developing airplanes when most people can't run very well? I think that AI is just completely separate from whether people are educated properly or not. We can't wait until everyone has enough of everything before developing something new, or we would still be trying to get enough whale oil before creating the electric light bulb. Are light bulbs helpful? Did they do more good than harm? – Scott Rowe Apr 09 '23 at 01:53
  • I don't think your analogies/comparisons of mental systems hold. Newton could have won multiple Nobel Prizes, e.g. for inventing the reflecting telescope, his work on optics, on heat. Hawking radiation is still a theory, and there won't be net outflows predicted from black holes until the universe is much cooler (~10^19 times the current age of the universe for a 1-solar-mass black hole). – CriglCragl Apr 27 '23 at 11:55
1

The weakness in the argument presented by the OP resides squarely here:

AI has made no significant progress towards its original goal of human-like general intelligence in a machine.

By what measure can we conclude no progress has been made? Cars can drive themselves, Boston Dynamics has robots that hang drywall, ML systems can fallibly decide that they are looking at a picture of a dog, and ChatGPT will now dispossess a new generation of human labor because it can create language products that used to require flesh-and-blood persons. Should we take this claim as definitive because computers haven't seized control of society, marching us off to a permanent detention center meant to harvest the electricity from our bodies like in the Matrix? Nonsense.

AI continues to make progress at aspects of human-level intelligence, and every aspect that is accomplished indeed contributes to putting together a system that approaches functionality that can be described as human-level. There's a certain homunculus-like presumption at play here that there is something magically human inside the brain that is not present inside machines, and that no amount of aggregating computation will ever approach that élan vital. This is the same feeble attack that the Chinese Room levels at human intelligence by trying to show that human thought is somehow a language of thought, as opposed to embracing a broader embodied, connectionist model of cognition.
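
To make the contrast concrete, here is a minimal, illustrative sketch (plain Python, toy data - not any particular researcher's model): a single perceptron acquires a simple input-output mapping from examples, so whatever it "knows" ends up distributed across numeric weights rather than written down as explicit symbolic rules.

```python
# Toy connectionist unit: a perceptron learns logical AND from examples.
# The resulting "knowledge" is a pair of weights and a bias, not a rule
# stated in any language of thought.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few passes over the examples
    for x, target in data:
        error = target - predict(x)      # classic perceptron update rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print(w, b)                              # e.g. roughly [0.2, 0.1] and -0.2
print([predict(x) for x, _ in data])     # [0, 0, 0, 1]
```

Scaling this up to deep networks changes the engineering enormously, but not the basic point: nothing in the trained weights is a sentence, which is why connectionist systems sit awkwardly with the physical-symbol-system framing the Chinese Room targets.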

There is no doubt that our systems of computation pale in comparison to the human brain presiding over the biochemical signaling of the body. But like the Ship of Theseus, what makes us us is not the planks, which are cleverly being pulled apart, explored, and replaced with electromechanical substitutes. There is something in crisis, but it is not the field of AI; it is the classical notions of computationalism that presume that thought is a language and that physical symbol systems are enough. They simply are not, and up-and-coming thinkers will posit better philosophical explanations, ones that will enable the continued advance of AI development.

J D
  • I believe you would enjoy this article: https://www.wired.com/story/defeated-chess-champ-garry-kasparov-made-peace-ai/ His views on human-AI cooperation to achieve goals that neither could accomplish on their own are interesting. –  Apr 11 '23 at 17:49
  • @StevanV.Saban "We thought we were unbeatable, at chess, Go, shogi. All these games, they have been gradually pushed to the side [by increasingly powerful AI programs]. But it doesn't mean that life is over. We have to find out how we can turn it to our advantage." This is just the tale of [John Henry](https://en.wikipedia.org/wiki/John_Henry_(folklore)) adapted for the modern audience. :D – J D Apr 11 '23 at 19:12
  • Yes. The whole argument boils down to natural vs artificial. In the case of John Henry, it's a natural machine vs an artificial machine. Now it's a natural intelligence vs an artificial intelligence. It's interesting to note that most artificial machines look and behave nothing like natural machines and the best artificial machines work cooperatively with natural machines. –  Apr 11 '23 at 19:20
  • A vending machine is a good example of an autonomous artificial intelligence. Once filled, it can dispense products, accept payment and offer change just like a store clerk. Store clerks haven't been replaced; it's just that the functionality is now available where store clerks never existed to begin with, like truck-stop restrooms. –  Apr 11 '23 at 20:01
  • @StevanV.Saban I'd argue it depends on the intentions of the people who control the economy, who inevitably tend to use such economic power to give themselves more economic power. Thus, the displacement is a social phenomenon more than a technical one. Take trucks that drive themselves. 3 million truckers might lose their job... or 3 million truckers might now have a job of stewarding 3 million autonomous trucks. The decision will be one society makes. – J D Apr 11 '23 at 20:11
  • I agree but it will also force a reexamination of the full role of a truck driver and evaluate the level of intelligence needed for all tasks. There will always be an intelligence-limiting step that will define the maximum intelligence required and a cost for that intelligence. For example, human drivers are better at discouraging car-jacking or cargo theft than an automated driving bot and there is an intelligence cost associated with that task. –  Apr 12 '23 at 00:08
  • @StevanV.Saban throughout my life I have always thought about some jobs: "Why would anyone want to do that? Why wouldn't they find something better?" Well, they don't and they can't, but it beats having nothing. I often wonder if we can somehow create better jobs for people and retire the bad jobs. Will AI help with that? – Scott Rowe Apr 13 '23 at 01:44
  • @ScottRowe That'll depend on the people who control the AI. Those in control have a vested interest in maintaining control. National healthcare is superior to US healthcare by almost every measure EXCEPT for the wealthy whose wealth provides them the best of everything. That it comes at the cost of the indigent is often not their concern. – J D Apr 13 '23 at 01:47
  • I'm betting that it will escape our control pretty quickly. – Scott Rowe Apr 13 '23 at 01:50
  • @ScottRowe There are two definitions of a better job: one that provides a comfortable lifestyle for the employee and one that makes more money for the employer. The "best" jobs do both. I believe that is how the power of AI can be best used. –  Apr 13 '23 at 01:51
  • @ScottRowe To continue the truck analogy, the best use of automated driving bots currently is to allow the truck to continue its journey while the driver is sleeping, rather than pulling over for sleep at a rest stop. This gives the benefit of a less stressful job (not having to drive while tired) for the driver and an increase in efficiency and productivity for the employer. –  Apr 13 '23 at 01:59
  • @ScottRowe There will now be two ways to hack into an intelligent truck driver: For an artificial intelligence, a software/hardware hack will be required. For a natural intelligence, monetary cash bribes will be required. –  Apr 13 '23 at 02:31