13

I know we refer to computers as using logic, logic gates and the like, but is this just us ascribing human capacities to the machines? It sounds like a case of us giving more meaning to the machines than they deserve. I've read about things like derived intentionality and this seems to echo that.

adkane
  • 275
  • 2
  • 5
  • 1

12 Answers

33

Allow me to be precise about this. Logic (in the formal sense) is a system of manipulating symbols according to rules. Computers can manipulate symbols according to rules — that is more or less exactly what they are designed to do — so in that sense computers can use logic.
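
To make that concrete, here is a minimal sketch in Python (the nested-tuple encoding of formulas is purely illustrative) of symbols being manipulated according to rules, with no understanding anywhere in the loop:

```python
# Minimal sketch: evaluating propositional formulas by blindly applying
# rules to symbols. The machine never "knows" what A or B mean; it only
# rewrites structures according to a fixed table.
from itertools import product

def evaluate(formula, assignment):
    """Recursively evaluate a formula like ('and', 'A', ('not', 'B'))."""
    if isinstance(formula, str):               # atomic proposition
        return assignment[formula]
    op, *args = formula
    vals = [evaluate(a, assignment) for a in args]
    if op == 'not':
        return not vals[0]
    if op == 'and':
        return all(vals)
    if op == 'or':
        return any(vals)
    if op == 'implies':
        return (not vals[0]) or vals[1]
    raise ValueError(f"unknown operator: {op}")

# Check that modus ponens is a tautology: ((A -> B) and A) -> B.
mp = ('implies', ('and', ('implies', 'A', 'B'), 'A'), 'B')
print(all(evaluate(mp, dict(zip('AB', v)))
          for v in product([True, False], repeat=2)))   # True
```

The program confirms by blind enumeration that modus ponens holds in every case; at no point does it need to know what A or B mean.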

That being said, computers cannot (currently) reason. Reason entails more than the application of logic: e.g. valuation, goal setting, reflexive analysis, reversibility, error awareness, etc. A computer can manipulate symbols to get from A to B easily enough, but it cannot (currently) distinguish A or B from each other or from any other proposition, cannot evaluate the importance or value of these propositions, cannot decide to do that manipulation on its own... You get the picture.

There are computers that are programmed to do mathematical proofs, and they occasionally create proofs that humans have been unable to solve. But they don't do it with intelligence; they do it with brute force. As a rule, mathematicians dislike computer-generated proofs, not out of some prejudice, but because CG proofs are ugly and inelegant, the kind of disorderly mess one gets by simply hammering one's way through a problem. When (and if) computers are capable of choosing beautiful, elegant, and efficient proofs over ugly, inelegant, sprawling proofs, then we might start talking about computers using reason.

Ted Wrigley
  • 17,769
  • 2
  • 20
  • 51
  • 13
    It's not merely aesthetics and beauty standards that make mathematicians dislike these proofs. The problem is rather that they don't "understand" the proof. With the more traditional logical proofs you've got to find a pattern that lets you untangle the mess; with computer-generated proofs you essentially just get the confirmation that it is like that. There is no pattern that you've found, no idea or shorthand. It's axiomatic, and if you want to explain it to someone else you'd have to run the program in front of them and verify the machine isn't corrupted. It doesn't feel like knowledge. – haxor789 May 15 '23 at 13:54
  • 5
    Minor nitpick: *"proofs that humans have been unable to solve"* doesn't really make sense. A proof is not something that can be solved. A problem could be solved by a proof. Or a theorem could be proved. But a proof cannot be solved. –  May 15 '23 at 19:45
  • 10
    Also, the core reason most mathematicians (that I know) don't like computer-generated proofs: Humans cannot make sense of them. We might be able to verify that they are correct, but they don't give us any insight into the nature of the problem or a proof. They provide the logic, but not the understanding. –  May 15 '23 at 19:47
  • 1
    "valuation, goal setting, reflexive analysis, reversibility, error awareness" - whether a computer can do any or all of these things depend very much on how you define each of them. A computer / AI can certainly e.g. have objective functions to evaluate things with or explore spaces and pick goals. And sure, humans program roughly how a computer goes about this, but one could similarly argue that a human's programming is its biology. It's oversimplifying the topic far too much to simply say computers can't do these things. And not all computer-assisted proof are brute force. – NotThatGuy May 16 '23 at 11:26
  • 3
    "That being said, computers cannot (currently) reason." I'm not sure if this still holds true in 2023. There's AI's that can be given a picture, let's say a helium filled balloon on a string, and asked "what would happen if I cut the string?", after which the AI answers "The balloon flies away". I'd argue that this is some form of reasoning. (Example taken from GPT-4) – Opifex May 16 '23 at 12:10
  • @NotThatGuy: That's reductionistic. Even if human cognition is purely mechanical — which is far beyond what science can currently demonstrate, incidentally — there is clearly something different going on. Humans don't need to be explicitly programmed to do *anything*; they pop out of the womb and in short order set about trying to explore and understand their world. Computers come out of the factory and don't do anything except what they are told. – Ted Wrigley May 16 '23 at 12:53
  • 6
    @Opifex: People fail to understand the significance of the inquiry/response distinction, which I think is attributable to 1940's anti-metaphysicalism (Russell, Skinner, Turing, etc...). We can train a computer (or a horse, a dog, or a rat, or even a student) to give an appropriate response to a given stimulus. For a computer that's a question of setting up proper programming, for the others it's a question of knowing what programming they already have and setting up proper punishments/rewards accordingly. – Ted Wrigley May 16 '23 at 13:04
  • 2
    @TedWrigley "Humans don't need to be explicitly programmed to do anything" - uhh... humans are the result of billions of years of "programming" in the form of evolution. And even then, much of what we learnt is just the ability to learn: it takes quite a few years of near-constant learning before a baby is capable of doing much thinking (never mind how much knowledge is imparted by other humans - knowledge that was slowly gathered over many generations). By comparison, the programming of computers barely begin to scratch the surface of the "programming" of humans. – NotThatGuy May 16 '23 at 13:04
  • 3
    @Opifex: But what every teacher looks for is that moment when a student (or a dog, or maybe even a horse) seems to *ask the right questions* and seek out pertinent understanding independently. AIs are extremely sophisticated response machines, but I haven't seen any indication that (current) AIs question their world in any meaningful way. They just wait for a stimulus, and give a response. Maybe they will, which will be fascinating and scary (Because they will quickly develop some very *hard* questions about us humans), but... – Ted Wrigley May 16 '23 at 13:11
  • @NotThatGuy: It's a good practice to wait 10 minutes or so before responding to a comment. That gives the commenter time to finish the thought, and it keeps conversations cool and level. I'll come back to your comment later, maybe – Ted Wrigley May 16 '23 at 13:13
  • @TedWrigley I only responded after 11 minutes. You posted another comment to someone else in the mean time, and if I have to wait until 10 minutes after the last comment to anyone, that could be a lot of waiting. Also, if you're posting multiple comments, it's probably a good idea to write your entire response up front, and then copy these into multiple comments which you can post in short succession, rather than sending someone a notification, but then expecting them to wait for 10 minutes before responding (but of course it's different if you realise there's more to say after posting). – NotThatGuy May 16 '23 at 13:37
  • 1
    @NotThatGuy: You're right, I got confused. Apologies. But I have to get to work so I'll come back to this a bit later – Ted Wrigley May 16 '23 at 13:45
  • 1
    @TedWrigley Humans arguably "question their world" as a result of evolution, since it increases our chances of survival. There's [an entire field of AI](https://en.wikipedia.org/wiki/Genetic_algorithm) which is dedicated to reproducing this evolutionary process of iteratively improving AIs' performance in a task by killing off those bad at it. Also, much of AI development involves AIs "exploring": trying different things to find what works. AIs don't really ask questions because this arguably doesn't align well with how we program them, how we interact with them, what their purpose is, etc. – NotThatGuy May 16 '23 at 13:59
  • I upvoted because there's a lot of solid material here. "computers cannot (currently) reason. Reason entails more than the application of logic: e.g. valuation, goal setting, reflexive analysis, reversibility, error awareness, etc." Computers are fully capable of valuation, goal setting, reflexive analysis, error awareness, etc. One form of error awareness is called exception monitoring and handling, and any software of import uses this technique. Goal setting? There are classes of algorithms in AI that set and work towards goals... – J D May 16 '23 at 16:55
  • Machine learning uses weighted connections to provide values as in [rule-based machine learning](https://en.wikipedia.org/wiki/Rule-based_machine_learning) where artificial neural networks might be used to decide which rules to use in real-time by monitoring a data stream. Systems are also being developed to formulate better questions as in [prompt engineering](https://en.wikipedia.org/wiki/Prompt_engineering). Food for thought. – J D May 16 '23 at 16:56
  • This answer is factually incorrect, since computers, which by definition are universal machines, can reason by simulating, for example, the human brain. To reason is about software, not hardware. There are also logical research tools, like [automated theorem provers](https://en.wikipedia.org/wiki/Automated_theorem_proving), which certainly show that computers can reason in limited ways already, by using today's commonly available software. Computers do not require any consciousness to do so. – xamid May 16 '23 at 20:22
  • Just as a general, undirected comment... I am disheartened by the number of intelligent, science-oriented people who assert speculation as fact in this topic area. I understand that there is a general ***belief*** among some that the human brain is a typical classical-dynamics type machine, and thus **in principle** the same as a standard binary, step-wise computational device (if differing in scale). But that is an untested hypothesis with little or no evidence to back it up. It's an interesting theory, but by no means a solid established fact, so please don't present it as such. – Ted Wrigley May 16 '23 at 21:47
  • 1
    @TedWrigley I am disheartened by the number of "intelligent, science-oriented people who assert speculation as fact in this topic area" by stating things like "computers cannot (currently) reason", or that there is [a mind-brain separation](https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism), even though we have no evidence of that (especially if one doesn't believe in [other] supernatural things). The computational brain hypothesis is fairly well-supported by psychology and neuroscience; it would be the default assumption until we prove a non-deterministic, non-random form of matter. – NotThatGuy May 17 '23 at 08:21
  • 1
    @NotThatGuy: You can have whatever 'default assumption' you like; I'm not here to question deeply held beliefs. But we cannot rest a supposedly scientific conclusion on some tenet of belief. It is an **observable fact** that 'computers cannot (currently) reason' as humans do. Maybe 'in principle' they can, maybe not; that's for research to determine. But we cannot leap from 'in principle' to 'in fact'. – Ted Wrigley May 17 '23 at 14:22
  • @NotThatGuy: And please note that you gave the game away a bit. I did not mention mind-body separation or any 'other' supernatural thing; that's a function of your belief system: something that you are fundamentally opposed to. If you pressed me to speculate, I'd suggest that human cognition is closer to quantum computing than digital processing — i.e., a system of superimposed states that only resolve when activated — and quantum computers are **not** Turing machines. But I do not use that speculation as the basis for 'scientific' conclusions, because that is not fact. – Ted Wrigley May 17 '23 at 14:35
  • 2
    If posed as a true or false proposition, then I would evaluate this statement as false: "Computers manipulate symbols according to rules." Why false? Symbols are abstract signs which acquire meaning in the context of human communication. Computers do not manipulate symbols. We build a physical device and we interpret states of the device as symbols. The Atmega328 processor architecture is represented in this diagram as a symbolic map https://www.watelectronics.com/wp-content/uploads/Arduino-Architecture.jpg but the device does not manipulate symbols; it has physical states and changes states. – SystemTheory May 17 '23 at 15:31
  • @TedWrigley An alternative name for "default assumption" is "null hypothesis": you know, that thing that practically every scientific field recognises and makes extensive use of. If you're going to dismiss an entire extensively-reasoned epistemological framework as nothing more than "deeply held beliefs", then you clearly have no interest whatsoever in actually understanding the position of anyone else. – NotThatGuy May 17 '23 at 15:53
  • @TedWrigley Did you forget what you wrote in your own comment or something? You explicitly referred to the belief that "the human brain is a typical classical-dynamics type machine", and your entire answer is based on a rejection of this. That's also used to argue against a mind-brain separation, so for you to say you "did not mention" that separation is just laughable. And we have no indication that quantum mechanics falls outside of "non-deterministic non-random form of matter" - some believe it's random, some believe it's deterministic, so that's compatible with what I said – NotThatGuy May 17 '23 at 15:53
  • @SystemTheory: The on/off states of a single computer bit are associated with the numerical **symbols** 1/0 by design. In fact, early computers were little more than automatic calculators, meant to do symbolic manipulations of numbers that humans could feasibly do, but much quicker. I don't think a computer understands numerical symbols any more than a calculator does (or any more than a hammer understands a nail), but human beings, who do understand, designed these tools to serve symbolic purposes. – Ted Wrigley May 17 '23 at 19:24
  • @NotThatGuy: 1. That is **not** what a null hypothesis is. If you need a primer in the philosophy of science, I have other posts that explain the null that I can point you to. – Ted Wrigley May 17 '23 at 19:26
  • @NotThatGuy: 2. I explicitly suggested that ***many people*** hold the belief that "the human brain is a typical classical-dynamics type machine", which is true. It was a subtle way of pointing out that you yourself are working from that belief system, in order to avoid giving you any offense. But subtlety does not seem to be your strong suit. – Ted Wrigley May 17 '23 at 19:28
  • @NotThatGuy: 3. Again, I am not here to criticize your beliefs, but I want you to stop acting as though your beliefs are unquestionably true (e.g., the heated, defensive tone you adopt here and in other threads, whenever I suggest your beliefs are not scientific canon). I get enough of that from religious fanatics, thankyouverymuch. – Ted Wrigley May 17 '23 at 19:32
  • @NotThatGuy: TL/DR: Drop it. You have a full head of steam but no traction to speak of, so this is not going to get you anywhere. – Ted Wrigley May 17 '23 at 19:33
  • @TedWrigley TL;DR: 1. Condescendingly pointing out how (you think) you know so much more than me. 2. "I was trying to not offend you, but also here's an insult". 3. Ignore the justifications I give for my beliefs, and then accuse me of unquestioningly holding to unjustified beliefs, lol. TL;DR: Some more insults and condescension. Also, I like how you managed to post 4 comments, yet you've addressed pretty much nothing I've written. Very constructive, thanks for your contribution. – NotThatGuy May 17 '23 at 20:07
  • @NotThatGuy: Glad you got my message. – Ted Wrigley May 17 '23 at 20:41
  • 1
    I studied electrical engineering taking courses on analog systems, digital systems, computer architecture, and I programmed computers using assembly language and C. I can tell you that in a typical digital computer the symbolic values On (1) or Off (0) are set by voltage bands, with a gap between the two bands to try to eliminate uncertainty of the gate state due to analog noise. Hardware engineers design the computer so that a programmer can map symbols onto the physical states which are the low voltage range and the high voltage range. If logic is manipulation of symbols, computers don't do that. – SystemTheory May 17 '23 at 21:04
  • @SystemTheory You may know plenty about computers, and so do I, but you should try learning a bit more about neuroscience, because you can make roughly the same argument there, to conclude that human brains "don't do [logic]". You've got yourself a [fallacy of composition](https://en.wikipedia.org/wiki/Fallacy_of_composition) there, in trying to conclude that the whole of the computer can't do something because parts of it can't do that. You're also [special pleading](https://en.wikipedia.org/wiki/Special_pleading) when it comes to the brain. – NotThatGuy May 17 '23 at 21:18
  • @NotThatGuy - Funny just before reading your comment I thought neurons only have physical states and electrochemical connections to other neurons. A light switch has two states that we call On or Off depending on the state of the light bulb. But neither the switch nor the light bulb are "manipulating symbols according to rules". Neither is a digital computer manipulating symbols according to rules. Neither is a neural network manipulating symbols according to rules. The neural network is also called a model free estimator that somehow makes us conscious of symbols, rules, and manipulations. – SystemTheory May 17 '23 at 23:53
  • @SystemTheory I'm not really getting the point you're trying to make, but computers can emulate neural networks (that's what most of modern AI does), so that would throw a spanner into an argument that brains and computers have different capabilities based on their parts. – NotThatGuy May 18 '23 at 10:45
  • @NotThatGuy the "neural network" that AI research uses is an abstract mathematical construction. It is inspired by the basic surface-level principle of a brain, but is not attempting to actually emulate one. – OrangeDog May 18 '23 at 13:29
  • 1
    @SystemTheory a digital computer at the hardware level is 100% manipulating symbols according to rules. Thus anything that runs on one (such as an artificial neural network) also works by manipulating symbols according to rules. Unless you have a non-standard definition of "symbol" and/or "rules"? – OrangeDog May 18 '23 at 13:32
  • @OrangeDog: I think we need to make the distinction that *humans* design and use computers as tools to manipulate symbols according to rules. There's no real reason to believe that a computer (or even an AI at this point) grasps the concept of a symbol or a rule. To use an analogy I used before, a hammer is great at driving nails, but doesn't understand what a nail is, much less the purpose and use of a nail. Humans understand the hammers and nails on a symbolic level; hammers don't. – Ted Wrigley May 18 '23 at 14:24
  • @TedWrigley "Understand" (and "symbolic level") is doing most of the heavy lifting in your comment there. We intuitively understand what it means to understand something, but I'm yet to see a concrete definition that necessarily excludes computers. If a hypothetical computer can behave in the exact same way a human does, using nothing more than capacitors and wires and such (we can argue to which degree computers can currently do this, but let's just leave it as hypothetical for now), you're going to have a hard time coming up with a definition that differentiates the two. – NotThatGuy May 18 '23 at 14:55
  • @TedWrigley a machine doesn't have to understand the concepts of what it's doing in order to do it. A toaster doesn't understand what bread is, yet it still toasts it. It has no concept of the number six, yet when you set it to six it still successfully burns it. Nor indeed does a human have to understand these concepts in order to operate it. – OrangeDog May 18 '23 at 14:56
  • @NotThatGuy: Hypothetically speaking, a begonia can 'understand' in exactly the same way a human can, as can a rock, or a hammer, or an entire planet. If someone wants to believe that is true, nothing I say can stop them. But there's no *reason* to believe that's true, and there's no *reason* to believe it's true of a computer. Humans experience consciousness, which gives us a reason to believe we understand things (which may turn out to be false, granted). Why would you assume the same of computers? – Ted Wrigley May 18 '23 at 15:46
  • 1
    @OrangeDog: Addition and subtraction can be done on an abacus. Same with a pile of stones. But it's humans who set the rules that make that work, and humans who understand and use the results. – Ted Wrigley May 18 '23 at 15:52
  • @TedWrigley well yes, you keep giving examples of devices that don't manipulate something following a set of rules. Instead of an abacus, consider a calculator. – OrangeDog May 18 '23 at 16:04
  • @TedWrigley You're the one who keeps asserting that the human mind is special, that something exists within the brain that we've never detected and have no evidence of. That there's more than what we have detected. And you also keep trying to shift the burden of proof to me to disprove this. That's much like asserting [there's a teapot floating in space](https://en.wikipedia.org/wiki/Russell%27s_teapot) because no-one has disproven it. Never mind that you ignore all evidence and justification against your claim (such as neuroscience and the incredible things computers/AI can do). – NotThatGuy May 18 '23 at 16:42
  • @TedWrigley I suspect you'll keep saying "computers can't reason" even when we have fully functioning androids walking among us, that can behave exactly like humans, because you seem to have an unfalsifiable claim there, that relies on an overly vague definition of "reason". – NotThatGuy May 18 '23 at 16:42
  • 1
    @OrangeDog: The point is that the system of symbols and rules is designed by humans, for human purposes. The only difference between a hammer and a computer is that the computer is a more intricate and sophisticated tool. Without human intervention and direction neither would follow any rule system at all. – Ted Wrigley May 18 '23 at 16:52
  • @NotThatGuy: When we have androids walking among us (if not before) then we will have evidence: something that goes beyond mere speculation and belief. I'm not averse to the idea that computers might develop human-like capacities in the future. I'm aware that they have not yet done so, and I'm not credulous enough to insist that something **is** merely because it **might be**. – Ted Wrigley May 18 '23 at 16:57
  • @OrangeDog - A single switch, such as a light switch, has two distinct physical states, but the switch does not recognize the human meaning of the words "switch", "symbol", "rules", or "physical states". Then how can a switch change physical states to manipulate symbols according to rules? These are all human concepts. A bit in a digital state machine is a single switch that has two physical states. The human engineer or programmer must specify how the machine changes physical states in the context of performing computing tasks. Each neuron in a network is a switch with activation potentials. – SystemTheory May 18 '23 at 17:19
  • @TedWrigley What are these "human-like capacities" you speak of, that you seem to think computers have never ever demonstrated before? 50, or even 20, years ago, if someone hypothesised about a computer being able to do what ChatGPT could do, many people would say that would be the evidence we need. All you're doing is pushing it beyond what we're currently capable of, with no justification beyond "we're not there yet". And I mean, yeah, sure, advancements we haven't made yet... haven't been made yet, but that's just a tautology. – NotThatGuy May 18 '23 at 17:28
  • @SystemTheory a single switch doesn't. Nor does a hammer. Nor does an abacus. However, a computer does. – OrangeDog May 18 '23 at 21:36
  • @TedWrigley no, the difference between a hammer and a computer is that a computer manipulates symbols according to a set of rules and a hammer doesn't. Who designed the rules is irrelevant. – OrangeDog May 18 '23 at 21:37
  • @OrangeDog - I think the word "symbol" is evoking a semantic argument. 1 - The computer is a state machine which transitions from one state to another when running code. 2 - States in the state machine have no inherent meaning. Source code maps meaning to the states and state transitions. 3 - When the state machine changes states it is not manipulating symbols it is implementing the machine code. 4 - We encode and decode the meaning of the states to implement rapid automated logic. 5. - Reverse engineering of machine code attempts to map symbolic meaning to machine states with no source code. – SystemTheory May 18 '23 at 23:04
  • @SystemTheory How do you know that a brain is anything more than "a state machine which transitions from one state to another"? It's fairly trivial to argue that computers aren't like brains (or aren't capable of what brains are capable of) when you only consider computers, but you should consider both of the two things you're comparing. – NotThatGuy May 19 '23 at 00:57
  • @NotThatGuy - We recognize the meaning of symbols, rules, logic, states, and state transitions as concepts in our mind. But we do not know how neurons, operating as state machines, somehow generate the recognition of concepts in our mind. See article https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218/ wherein AI expert Geoffrey Hinton jokes about his knowledge of brains: “So, about a year ago, I came home to dinner, and I said, ‘I think I finally figured out how the brain works,’ and my 15-year-old daughter said, ‘Oh, Daddy, not again.’” – SystemTheory May 19 '23 at 01:29
  • @SystemTheory So you're purely [arguing from ignorance](https://en.wikipedia.org/wiki/Argument_from_ignorance) then? You can't say that computers aren't like brains based on the fact that you/we don't fully understand how brains work. – NotThatGuy May 19 '23 at 02:39
  • @OrangeDog: Don't be ridiculous. If I created a system in which driven nails represented 1s and popped up nails represented 0s, I could build a perfectly functional computer out of hammers. It would be slow, and loud, and dangerous to fingernails everywhere, but it would work. The distinction you're trying to make doesn't exist. – Ted Wrigley May 19 '23 at 03:38
  • @TedWrigley no you couldn't. Hammers don't do anything by themselves. Hammers don't act according to rules (other than physics). In your system it is a human that's manipulating symbols according to rules, not the hammers. You could make a computer using hammers as a component, but that doesn't make a hammer a computer. – OrangeDog May 19 '23 at 08:05
  • @SystemTheory the state of a computer is the current set of symbols. Moving from one state to another is manipulation of those symbols. At the physical level, a symbol is a pattern of voltage in particular circuits. They're manipulated by powering the circuitry around them in particular ways. As we all use von-Neumann architecture, the rules are also encoded as symbols, which may be the source of confusion. – OrangeDog May 19 '23 at 08:09
  • Before trying to look at the philosophy of computing, one should probably first learn what computing mathematically is and how computers physically work. – OrangeDog May 19 '23 at 08:11
  • @OrangeDog: See the wikipedia article on [mechanical computers](https://en.wikipedia.org/wiki/Mechanical_computer). They are a real thing, no different in principle than electronic computers, limited only by the implications of using moving physical parts. So yes, I ***could*** build a computer from hammers. – Ted Wrigley May 19 '23 at 15:11
  • @OrangeDog: You seem to be stuck on the idea that electronic computers are manipulating symbols, when in fact all a computer does is sequentially change states (whether physical or electronic). These states are not symbols to the computer (at least not yet); they are only symbols to the people who design and use the computer – Ted Wrigley May 19 '23 at 15:15
  • @TedWrigley I said you could. The computer you've built will then manipulate symbols (your up/down nails) according to rules. A single hammer by itself does not. A symbol is a symbol whether something knows it is or not. – OrangeDog May 19 '23 at 16:01
  • @OrangeDog - Long ago I wrote code using the 8051 uC Assembler for my senior project in electrical engineering. I used a logic analyzer to debug the software. Recently I wrote code using AVR-GCC library tools and AVRDude to develop hardware and software projects for 8-bit AVR. If I write C code, then compile the program, the compiler creates binary machine code. AVRDude plus a programmer interface writes binary code to flash memory. If I analyze raw machine code, with no knowledge of how it was written, compiled, or the target uC it might be impossible to reverse engineer its symbolic meaning. – SystemTheory May 19 '23 at 17:56
  • @NotThatGuy - Wikipedia - *In philosophy of mind, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as instances of subjective, conscious experience.* Based on my conscious perceptions and concepts (qualia) developed in the context of self-other communication I know that the output of each neuron in a network is like a binary switch in a state machine. I do not know how distinctions (pattern recognition) or qualia emerge from building and training neural networks. I often wonder who is more ignorant or wicked (self-deceived) when humans disagree in the context of disputes! – SystemTheory May 19 '23 at 18:13
  • @SystemTheory Do you think that a definition counts as evidence for the thing it's defining? Because either you think that, which is patently absurd, or you've missed every time I've mentioned how your conclusion is based on you not understanding how the brain works. And no, it's not a question of "who is more ignorant" - appealing to ignorance is a logical fallacy, which appears in your argument. Your argument would be fallacious regardless of how ignorant either of us are. It's unfortunate that you seem to have taken my rebuttal of your argument as a personal attack, and responded in kind. – NotThatGuy May 19 '23 at 18:33
  • @NotThatGuy - When I was young my older brother would often say, "Who died and made you the authority?" I think what I think independent of what you think; and you apparently do the same. I attribute that to the mystery of our brains. I am done having semantic arguments with you. – SystemTheory May 19 '23 at 19:43
  • @SystemTheory You're the one who haven't defined any of your terms, who built an argument on a vague conceptualisation of how humans think, without being able to expand on or justify that. I mean, I guess you can think it's a "semantic argument" to point this out and to try to get you to make a more concrete argument. (You attribute us thinking differently to "the mystery of our brains"? Have you ever even written a line of code before? Computers can do fundamentally different things from one another based on their programming and input data - that's not particularly mysterious) – NotThatGuy May 19 '23 at 19:57
  • @NotThatGuy - Have you read my recent comment to OrangeDog above where I describe my experience writing Assembly and C code and my knowledge of how a digital computer (state machine) runs code? Do you know that my brain is somehow encoding meaning into these symbolic words in this sentence according to my subjective interpretation and that your brain is somehow decoding these symbols according to your interpretation? What level of definition is necessary and sufficient to describe the mysterious functions of neural networks or brains in your interpretation? – SystemTheory May 19 '23 at 20:14
  • 1
    @SystemTheory I have read your earlier comments about you having written code, but the way you talk about brains makes it sound like you know nothing about coding, because much of the same ideas apply. From my perspective, you're taking some input, putting it through a black box, and generating some output. I can accept that you are, in some vague sense, "encoding meaning" and there are symbols involved, but if you're going to assert that this somehow excludes computers, then I can't accept that until you formally define what those terms mean and/or clearly justify why computers are excluded. – NotThatGuy May 19 '23 at 21:07
  • @NotThatGuy I am being flagged to move extended comments to chat. I say the same ideas do not apply to the physical substrate (computer running operation codes as an explicit state machine; neural network mapping inputs to outputs as a complex black box implicit state machine) and to the emergent properties of the artificial or organic mind. The cognitive properties of mind are called emergent because they are not explained by the reduction model incorporating the computer operation codes or the mathematical structures and algorithms used to build artificial neural networks. I hope you get it. – SystemTheory May 20 '23 at 01:12
7

You might like this answer: Why is a measured true value “TRUE”?

Physicist Richard Feynman makes the case in this lecture that computers are essentially limited to sorting: Hardware Software & Heuristics (from 1986).

logike, "branch of philosophy that treats of forms of thinking; the science of distinction of true from false reasoning," from Old French logique (13c.), from Latin (ars) logica "logic," from Greek (he) logike (techne) "(the) reasoning (art)," from fem. of logikos "pertaining to speaking or reasoning" (also "of or pertaining to speech"), from logos "reason, idea, word"

From Etymonline

Do computers use 'forms of thought'? You can split hairs semantically if you like. But I suggest they are able to make inferences by applying rules in exactly the same way humans do using formal logic.

The Chinese Room Argument applies, as does the syntax/semantics distinction. But I'd say 'form of thought' is syntax.

CriglCragl
  • 19,444
  • 4
  • 23
  • 65
  • But computers make inferences without understanding so that looks to be something that distinguishes the way in which we infer. Hair splitting is where the fun is to be had in philosophy, and this split looks like the essence of many philosophy of mind problems. – adkane May 15 '23 at 11:50
  • 3
    @adkane: A lot of humans make inferences without understanding too. Mauro's point about an engine using the rules of thermodynamics without understanding them covers it I think. 'Thinking' is a slippery word, I'd look to family resemblances & modes of life to give it meaning, rather than try to find the word's essence & an exhaustive exacting definition that must diverge from use of the word anyway. – CriglCragl May 15 '23 at 12:03
  • You are conflating analogy with description. – David Gudeman May 15 '23 at 15:02
  • My brain cells make inferences without understanding anything. – gnasher729 May 19 '23 at 16:12
5

Computers use logic, but differently than people and other Great Apes.

Logicians make a distinction between formal logic and informal logic. Computers are fully capable of doing formal logic, but they struggle with informal logic, because while formal logic is a syntactic system, easily done by substituting symbols for other symbols, informal logic relies heavily on domain-specific semantics, including what some call the material logic of natural language. In the latter category, it's possible to build sophisticated software to replicate informal logic, but doing so requires a lot of work from people to codify the extra rules, values, and language needed to emulate the informal logic that human beings, thanks to their genetic endowment, perform with ease. Expert systems are a special sort of AI that sifts through facts and draws inferences, and such systems ultimately rely on transistors and machine code. Of course, the human brain ultimately relies on neurons and epigenetic components of the nervous system, so human beings also have a physical basis.
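
To make the syntactic character of formal logic concrete, here is a minimal sketch in Python (the nested-tuple encoding is illustrative, not anyone's canonical implementation) that applies double-negation and De Morgan rewrites purely by substituting symbols for other symbols:

```python
# Minimal sketch of formal logic as pure symbol substitution: rewrite
# rules are applied by pattern alone, with no access to meaning.
def rewrite(f):
    """Push negations inward (De Morgan) and drop double negations."""
    if isinstance(f, str):                      # atomic symbol
        return f
    op, *args = f
    if op == 'not':
        (inner,) = args
        if not isinstance(inner, str):
            iop, *iargs = inner
            if iop == 'not':                    # not not X  =>  X
                return rewrite(iargs[0])
            if iop == 'and':                    # not (X and Y) => (not X) or (not Y)
                return ('or',) + tuple(rewrite(('not', a)) for a in iargs)
            if iop == 'or':                     # not (X or Y) => (not X) and (not Y)
                return ('and',) + tuple(rewrite(('not', a)) for a in iargs)
        return ('not', rewrite(inner))
    return (op,) + tuple(rewrite(a) for a in args)

print(rewrite(('not', ('and', 'A', ('not', 'B')))))
# ('or', ('not', 'A'), 'B')
```

The rewriter never consults what A or B stand for; the substitution is the whole game, which is exactly what makes formal logic mechanizable.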

Now, 50 years ago, there was a clear line between human reasoning and machine reasoning, because the rules that were implemented ultimately had to be crafted by human programmers. Today, however, the game has changed somewhat, with connectionist models exerting some influence in these systems. For instance, thanks to the ever-widening collection of machine learning (ML) techniques, there are now rule-based machine learning models:

While rule-based machine learning is conceptually a type of rule-based system, it is distinct from traditional rule-based systems, which are often hand-crafted, and other rule-based decision makers. This is because rule-based machine learning applies some form of learning algorithm to automatically identify useful rules, rather than a human needing to apply prior domain knowledge to manually construct rules and curate a rule set.

What does this mean? It means machines are learning in the same way people do, that is, through induction by repeated observation using information from the outside world. A computer vision system which learns faces does manifest intentionality in the same way a dog, chimp, or child manifests intentionality.
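
As a minimal illustration of rules being induced from observations rather than hand-written, here is a sketch using scikit-learn (the toy data and feature names are invented for the example):

```python
# A toy illustration of rule-based machine learning: the rules are
# induced from examples, not written by a programmer.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical observations: [has_fur, lays_eggs] -> is_mammal
X = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
y = [1, 1, 0, 0, 1]   # the last row is a platypus: furry, lays eggs, mammal

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, feature_names=["has_fur", "lays_eggs"]))
# Prints learned if-then rules, e.g. "has_fur <= 0.5 -> class 0";
# no human ever typed that rule in.
```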

So machines, particularly those equipped with cameras, sensors, and other robotic peripherals, can learn from the environment, devise their own rules, and then use those rules to make decisions. That's the essence of how and why humans use logic. While we get our physical endowment from genes which have evolved, our machines are still largely designed, though there is also an increasing repertory of computational strategies to evolve designs, such as the use of genetic algorithms and ML strategies that mimic natural processes. While clearly the hows and whys differ between human beings and machines in their use of logic, there is no disputing that sophisticated robotic systems designed to mimic human reasoning can and do use logic. "Using logic" is not a yes-no question but one of degree, and there is little sign that computers and robots won't continue to use logic more and more like people do, as systems become more sophisticated and as research advances our understanding of how the human brain and its neurons compute to provide human-level intelligence.

J D
  • 19,541
  • 3
  • 18
  • 83
  • 1
    Nice, between all the bogus answers there is this one which is mostly correct. But "using logic" is indeed a yes-no question when one is able to ask "Does X use logic?", which already treats it as a boolean property, as the question does. Sure, there are different degrees and kinds of how logic can be used, but that was not asked about. A computer already uses logic because it applies it through logic gates. The question asked is that simple to answer. But the extension to higher reasoning is still a valuable addition. Though computers being universal machines already implies this capability. – xamid May 16 '23 at 20:39
  • "Object-level vs meta-level" would be a more suitable and correct dichotomy than "formal vs informal". As it happens they correlate; but that's incidental. – Rushi May 17 '23 at 10:22
  • 1
    @Rusi Interesting assertion. Are you saying that humans are capable of meta-logic in a way computers are not? – J D May 17 '23 at 15:48
  • @xamid 'But "using logic" is indeed a yes-no-question". Oh, I agree 100%. It's a rhetorical strategy to use is instead of ought to insist on an ought, and an ought instead of an is when politely questioning the is. – J D May 17 '23 at 15:49
  • @JD Yes. Reasoning = meta. Boolean (aka digital) logic = object. The 2 levels may *seem* alike but are 2 distinct levels – Rushi May 17 '23 at 16:00
5

From Wiktionary:

Noun
logic (countable and uncountable, plural logics)

  1. (uncountable) A method of human thought that involves thinking in a linear, step-by-step manner about how a problem can be solved. Logic is the basis of many principles including the scientific method.

  2. (philosophy, logic) The study of the principles and criteria of valid inference and demonstration.

   2001, Mark Sainsbury, Logical Forms - An Introduction to Philosophical Logic, Second Edition, Blackwell Publishing, page 9:

   "An old tradition has it that there are two branches of logic: deductive logic and inductive logic. More recently, the differences between these disciplines have become so marked that most people nowadays use "logic" to mean deductive logic, reserving terms like "confirmation theory" for at least some of what used to be called inductive logic. I shall follow the more recent practice, and shall construe "philosophy of logic" as "philosophy of deductive logic".

  3. (uncountable, mathematics) The mathematical study of relationships between rigorously defined concepts and of mathematical proof of statements.

  4. (countable, mathematics) A formal or informal language together with a deductive system or a model-theoretic semantics.

  5. (uncountable) Any system of thought, whether rigorous and productive or not, especially one associated with a particular person.

   "It's hard to work out his system of logic."

  6. (uncountable) The part of a system (usually electronic) that performs the boolean logic operations, short for logic gates or logic circuit.

   "Fred is designing the logic for the new controller."

It is senses 1, 2, and 5 in which we use the word "logic" when discussing human behaviour. Specifically, deductive logic and inductive logic, which fall under sense 2, are the kinds that philosophers talk about.

It is senses 3 and 4 in which mathematicians use the word, also called "axiomatic logic" or employment of an "axiomatic system". One such system is that of Boolean algebra. These usages of the word to refer to the application of axioms to derive other true statements within that axiomatic framework naturally arose from the earlier sense in which "logic" refers to deductive reasoning.

It is senses 4 and 6 in which computer scientists use the word, and sense 6 is almost exclusively the sense in which computer hardware / embedded systems engineers and computer programmers use it. Specifically, the hardware of modern computers implements the logic of Boolean algebra. That is the job of so-called "logic gates".
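
Here is sense 6 in miniature, as a sketch in Python (gates modelled as functions on the bits 0 and 1; the modelling is illustrative): a single primitive, NAND, suffices to derive the rest of Boolean algebra, which is one reason it is such a common hardware building block.

```python
# Sense 6 made concrete: Boolean logic gates as bit functions,
# with NOT, AND, and OR all derived from NAND alone.
def NAND(a, b):
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

# Truth-table check: the derived OR behaves as Boolean algebra demands.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
print("all gates derived from NAND alone")
```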

See also: Rewriting and abstract rewriting systems

Jivan Pal
  • 170
  • 4
  • 1
    I like the citation of the taxonomy, but did you answer the question affirmatively or negatively? I can't tell. – J D May 16 '23 at 16:21
  • An understanding of the different senses in which the word "logic" is used, and how the computational senses arose, should resolve OP's question. To directly answer it, though: "Yes, computers use logic, but not the kind you're thinking of." – Jivan Pal May 16 '23 at 16:23
  • So, which definitions do you claim computers cannot do? For instance, "senses 4 and 6 in which computer scientists use the word". It seems to me an AI researcher who is a computer scientist uses every sense of the word when discussing the application of logic to goal-oriented systems. – J D May 16 '23 at 16:23
  • @JD I'm not making any such claims. I'm pointing out that OP is conflating the usage of "logic" in philosophy with its usage in mathematics and computer science. OP specifically asked about e.g. why logic gates are called *logic* gates. If you want to talk about whether we can create computers that can understand/learn abstract concepts and attempt to perform sound deductive reasoning in the philosophical sense, I think that deserves to be its own question, and indeed much research and debate has already been done and had in this area, and continues to be actively done and had. – Jivan Pal May 16 '23 at 16:29
  • 1
    The post said "logic gates and the like". Which might be broader than NANDs as a basis for an ALU/CU/MMU. When taken with the tag applied by the OP, philosophy-of-mind, and the invocation of the philosophical term "intentionality", I suspect the OP had a broader scope. As a software engineer, I think it's fair to say that the intersection of logic and computer science is far broader than Boolean logic. Turing Machines and formal languages associated with such automata can be used to build intuitionistic logics, e.g. see [CHC](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence). – J D May 16 '23 at 16:39
  • +1 For a positive contribution. – J D May 16 '23 at 16:40
  • @JD Of course the intersection is greater than *just* Boolean logic/algebra, but I believe that is entirely captured (if not only *almost* entirely captured) by sense 4. Constructive mathematics is indeed very cool/interesting, and I'm intimately familiar with the use of the CHC in that regard. It's still not deductive or inductive reasoning in the philosophical sense, which is what it seems to me OP is exclusively thinking of when they hear the word "logic". – Jivan Pal May 16 '23 at 16:50
  • Thanks for responding to a grilling! Welcome, and please continue to contribute. :D – J D May 16 '23 at 17:02
  • 1
    You inspired me to ask a question. Take a stab at it. Thanks! https://philosophy.stackexchange.com/questions/99310/is-rule-based-machine-learning-an-example-of-inductive-logic-in-the-philosophica – J D May 16 '23 at 17:11
3

Computers do not use logic. They do, however, use logic GATES. Logic gates perform very small pieces of logic calculation, like AND, OR and NOT, but this is not enough to say they use logic.

Computers follow instructions. It is possible to program a computer to perform pretty advanced logic deductions, but most computers don't do this. You could say that these computers use logic, but they are an exception, not the rule.
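
To illustrate what such programmed deduction looks like, here is a minimal sketch in Python of naive forward chaining over if-then rules (the facts and rule names are invented for the example):

```python
# Sketch of a program performing logic deductions: naive forward
# chaining over Horn-style rules until no new facts appear.
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "mortals_die"}, "socrates_dies"),
]
facts = {"socrates_is_a_man", "mortals_die"}

changed = True
while changed:                       # keep applying rules until a fixpoint
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("socrates_dies" in facts)      # True: derived, never stated directly
```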

Stig Hemmer
  • 392
  • 1
  • 5
  • So an expert system doesn't count as using logic? – J D May 16 '23 at 16:50
  • 1
    "but this is not enough to say they use logic" Why? "Computers follow instructions" Yes, and they apply logic (thus use logic) while doing so. Actually, they are much more reliable than any human in using logic. Humans are just more creative. – xamid May 16 '23 at 20:27
  • I think the question hinges on the word 'use'. If something doesn't have agency, it can't be said to *use* anything. Similarly you can't say that computers "follow instructions". – Scott Rowe May 17 '23 at 22:55
3

Difficult. I mean, the question is basically where the "mind" or "soul" of the "entity" of a computer is located, or whether it has or is any of that to begin with.

Like if you take your device and consider it a blackbox then somewhere inside of "it" something is happening that can be described by logical operations, so "it" "does" apply logical principles somewhere (by choice or accident).

The problem is that when you take away the blackbox and look at the inner workings then there is no agent and none of the components immediately presents itself as having the capability of being an autonomous agent. So there is nothing that could actively "use" something.

And if you think that electricity is some quantum black-magic hocus pocus and that there's some hidden nonsense going on... well, you could build mechanical computers, water-based computers, and it hardly gets more transparent than this plastic-based one running the game "Nim".

So let's consider this case of Dr. Nim. Where is the logic happening? Is it the user? The user sets things in motion with their input, but they aren't doing any of the computation or applying logic. In fact the user just needs to be able to push a button and flip a switch; no deep thought or logic required. Is it the current? In this case a marble? A MARBLE? While it moves the pieces, it certainly isn't the marble acting logically. Is it the plastic thing itself? Kinda... sorta... kinda not.

It's something that moves, and it moves according to logical rules, but actually it only moves by mechanical rules that aren't really "followed" by an agent, but simply conform to the laws of physics. If you'd turn it upside down or employ it in space it wouldn't work. That being said, if you'd put a human into a hostile environment they also might cease to work.

To a degree the logical work is done by the programmer/engineer who constructed the gizmo; nevertheless, when it's actually used they are no longer present. So for this creation, their "creator god" may as well be dead and it would not interfere with the scenario, so how important can they be? The logic is contained in the device itself, not merely in the creator of that device.

Now, for the sake of argument, you could abstract from the device the algorithm that is at the heart of it and consider that to be the computer, in which case the user would be the one applying logic by following the logical steps, while the program itself would not be using logic; it would be an implementation of logic. However, as we've already covered, the user could just as well be incapable of logic and merely starting the process while the device performs the heavy lifting. But is it "USING" logic? "IS" "IT", to begin with?

Now, so far we could put the blame on the programmer and argue the computer is just a manifestation of an algorithm, and thus is a usage of logic but does not itself use logic.

However, that doesn't quite cut it anymore, because what about machine learning and self-programming computers? We can construct systems that improve their mechanism on the fly by being given a desired output, a capability to produce outputs, a set of parameters and a comparison function. An output is produced and compared to the desired output: if better suited, keep it; if unsuited, discard it and try again with different parameters; if close enough, stop.
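
That trial-and-error loop fits in a few lines. A minimal sketch in Python (the target, parameter and step size are invented for illustration):

```python
# The loop described above as a (1+1)-style search: keep a random
# parameter change only if it scores better against the desired output.
import random

target = 42.0                                   # desired output
def produce(params):                            # capability to produce outputs
    return params["x"] * params["x"]
def error(output):                              # comparison function
    return abs(output - target)

params = {"x": random.uniform(-10, 10)}         # the set of parameters
for _ in range(10_000):
    candidate = {"x": params["x"] + random.gauss(0, 0.1)}
    if error(produce(candidate)) < error(produce(params)):
        params = candidate                      # better suited: keep it
    if error(produce(params)) < 1e-3:           # close enough: stop
        break

print(params["x"], produce(params))             # roughly +/- sqrt(42), ~42
```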

Now the programming part is no longer the part of the programmer. In fact, it's possible that the program solves problems that were too complex for the programmer themselves. At the same time it's still the same non-thinking machine. The game might have become more complex, but the concept remains the same. It's still following an algorithm, still only an application of logic, and maybe not even logic but probability. That being said, when pushed to the extreme it could by chance find logical properties: if you want it to present you with a decision tree, it's perfectly possible that what you end up with is a logical diagram. So it's not just an implementation of logic; it creates implementations of logic.

Though given that it's still just passive material, does that count as "using" logic? And if it doesn't, does our application of algorithms count as logic? Because in the end our "usage" mostly rests upon our biased perception of ourselves as agents rather than passive material, but if passive material is capable of such feats, where is the line between the two?

Also, even if there is a line, and we are active and computers are certainly not, given that they produce logic and can be understood with logic, is it useful to think of them in terms of entities using logic? Is our process of creating logical constructs similar enough for that analogy to be useful, regardless of whether it holds?

And here it might come back to the programmer/engineer, because the computer's way of thinking is largely modeled after our own, though much of the programming currently happens at a level that is far removed from the actual processes. So while the analogy might once have been useful, we might move in a direction where it no longer is, or we might coincidentally move in the same direction as our own way of thinking. But unfortunately, as of right now we don't seem to know how our own mind and consciousness work to begin with, so a comparison to our most capable tools is always attempted, but it's never clear whether it actually works like that.

haxor789
  • 4,203
  • 3
  • 25
  • Maybe Logic uses computers? Maybe Logic uses humans, as its ecosystem to expand and take over the universe? Many questions are asked backwards, in terms of humans, and should be asked as it using us. (asked by whom?) – Scott Rowe May 17 '23 at 10:38
3

I think it is appropriate to quote George Boole here:

No general method for the solution of questions in the theory of probabilities can be established which does not explicitly recognise, not only the special numerical bases of the science, but also those universal laws of thought which are the basis of all reasoning, and which, whatever they may be as to their essence, are at least mathematical as to their form.

The way I understand this is that he is proposing there are "universal laws of thought", of which the human mind as well as a computer are physical implementations. This is meant with regard to their ability to work with logically provable statements, not creativity or the ability of abstraction, which classical computers completely lack. Any problem a computer is to solve has to be presented and tailored to it specifically by humans, but that does not take away from their ability to actually solve "logical problems" according to "laws of thought".

Thomas Hirsch
  • +1 Indeed: https://en.wikipedia.org/wiki/Law_of_thought Welcome! – J D May 17 '23 at 14:52
  • 1
    IMO, you can distinguish between formal thinking, which uses a mathematical language and establishes truths by means of proofs built from axioms, using formal logic, and informal or *natural* thinking, which essentially produces fuzzy inferences, that are always interpretable in different ways and neither true nor false. Even expressing Newton's law of gravity isn't formal enough to get an absolute truth value. – Yves Daoust May 17 '23 at 15:26
  • @YvesDaoust I was referring to "creativity and abstraction" to distinguish between humans and computers. However the question was not "how different are humans and computers?" but "do computers use logic?" and the person I quoted invented the mathematical rules for logical calculations on which computers are based. – Thomas Hirsch May 18 '23 at 23:08
2

Computers intensely use formal logic, which is built into them. Formal logic amounts to elementary arithmetic (a.k.a. Boolean algebra) with the binary digits 0 and 1 and the operators not, and, or.

Combining these operations in various ways, you reconstitute all arithmetic on integers and other numerical representations (a sketch follows the list below). All that computers can do can be summarized as handling

  • numbers (numerical applications),
  • characters (text processing),
  • programs (textual sequences of instructions to be executed automatically).
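
As promised above, here is a minimal sketch in Python of how integer arithmetic is reconstituted from the boolean operators alone (an 8-bit ripple-carry adder; the gate modelling and bit width are illustrative):

```python
# Sketch: rebuilding integer addition from nothing but not/and/or
# on single bits, the way hardware does with a ripple-carry adder.
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum bit, carry-out bit)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x, y, width=8):
    """Add two unsigned integers bit by bit, as the circuitry does."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(19, 23))   # 42, obtained purely from 0/1 manipulations
```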

Programs do use a more advanced form of logic, closer to what we call first-order logic, and programming is a task similar to theorem proving, which is still completely formal. But programming is in any case performed by humans, not by computers.

So, no, computers do not use logic in the sense of a cognitive activity: computers do not think, they have no mind, no consciousness, no nothing. But due to their speed and immense data storage capabilities, they achieve outstanding tasks that a human could never perform, just using zeroes and ones.


The question can be re-examined in the light of the famous Artificial Intelligence techniques, which aim at handling "knowledge" rather than "data". But so far, not a single droplet of intelligence has been obtained.

Yves Daoust
  • 121
  • 4
  • 1
    All a computer does is change the state of things from on to off or vice versa real quick. – haxor789 May 17 '23 at 10:55
  • 1
    Yes, in fact, the human way of conceptualizing things to make them more intuitively understandable is often why we have difficulty *really* understanding them, since we are approximating them at a surface level and we are confused when the metaphor reaches its limits and fails to predict further behavior. The fascinating thing about computers and related entities is that they do not “do” logic, they more *are* logic. Humans basically found a physical system that they felt could represent or stand in for a certain pattern they knew - so they use the system to embody the pattern. – hmltn May 17 '23 at 12:02
  • @haxor789: I don't see how this comment is relevant to the discussion. What matters is to know whether the changes of state are the results of some thinking or just a huge but predictable/reproducible computation. – Yves Daoust May 17 '23 at 14:06
  • @YvesDaoust What's the difference between thinking and deterministic computation? – Jivan Pal May 18 '23 at 14:14
  • @YvesDaoust What a vague response... Do you mean to say that the answer is obvious (in which case, what is it?), or that you don't know? Do you think there is such a difference? I don't think so. – Jivan Pal May 19 '23 at 00:41
2

Most answers are confused, and the problem is simple.

  • Logic is the FORMAL set of rules that govern reason.
  • Reason is the potential to think.

I know we refer to computers as using logic, logic gates and the like, but is this just us ascribing human capacities to the machines?

No. Machines implement rules. We would be "ascribing human capacities to the machines" if we said machines "think" or "reason". But they DO implement and apply rules, which implies they apply Logic.

It sounds like a case of us giving more meaning to the machines than they deserve.

Badly expressed, but your point is somewhat understandable. If machines implement Logic and we want to communicate about that, we just do. That is, machines "deserve" it.

J D
  • 19,541
  • 3
  • 18
  • 83
RodolfoAP
  • 6,580
  • 12
  • 29
  • I'm not downvoting, but logic isn't necessarily formal. See https://en.wikipedia.org/wiki/Informal_logic – J D May 17 '23 at 14:51
  • @YvesDaoust Let's take the claim they can't. Do you have a philosophical source that provides a compelling argument, or is that your opinion (which is not to imply you are right or wrong, but merely asserting an original thesis)? – J D May 17 '23 at 17:24
  • @YvesDaoust I have no quarrel with the claim that current AI is not architecturally rich enough to create a functionalist equivalent of human-level informal logic, though I suspect it's more a matter of the inconceivability of the task given the realist constitution of the philosophy of mathematics that seems to dominate AI research paradigms. The physical symbol system hypothesis is hopelessly the wrong paradigm since it grounds semantics in the equivalence of other symbols as opposed to the systems of physical computation themselves, which is the origin of most semantics obscured... – J D May 17 '23 at 19:05
  • by the confusion surrounding the proper nature and role of dualism. And therefore the philosophical confusion in the Academy over the nature of physical computation is why AI programs, which continually conceive of themselves as "technology projects", continue to fall short of the broader goal of imbuing systems with dispositions that can be construed as intelligent. Too few computer scientists understand the failure of big symbol systems well enough to see that the goals must align more with using embodied systems to model informal logic more intelligently. That IS a philosophical problem – J D May 17 '23 at 19:09
  • As for computing, as someone who programs on a distributed architecture, it's only a matter of time before the resources scale up. – J D May 17 '23 at 19:10
1

@Ted Wrigley's answer is excellent, but it omits a hidden sub-question inside the original question: neither the question nor the answer differentiates between what "bare computers" can do and what "programmed computers" can do.

A programmed computer can do anything: it can do all of the "cannots" that have been listed (given enough programming effort). At what point does a sufficient amount of logic stop being "just logic" and start constituting "reason"? You decide.

However, we might assume the original question was asking only about what "bare" computers can do. It could be rephrased as: "does the core design of computers in some way resemble logic?"

Under that assumption, everything Ted said is valid: yes, they are built around logical principles, but no, "reason" is not part of the core design.

jeancallisti
  • 141
  • 3
1

Digital Logic

https://cs.lmu.edu/~ray/notes/digitallogic/

Believe it or not, we can model all computational processes (that we know of) by operations from, of all things, plain-old classical logic.

The reference shows how digital gates and other devices in digital computers are used to implement bivalent logic. These devices do not manipulate symbols according to rules, because a symbol is an abstract sign recognized by a human intelligence (HI) or a general artificial intelligence (GAI), if such an entity ever emerges. The devices implement logic functions using physical system states and deterministic state transitions. We map symbols onto the physical system states to program the logic gates or the computer, which then performs logic functions at great speed.
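
As a hedged sketch of that point, consider simulating the gates as pure state-transition functions: a single primitive (here NAND) suffices to build the rest, and at no point does the machine "recognize" a symbol; the names exist only for us.

```python
# All of bivalent logic from one primitive state-transition function.
# The symbol names (NAND, NOT, ...) are for human readers only.

def NAND(a, b):
    return 0 if (a == 1 and b == 1) else 1

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

# Deterministic mapping from input states to output states:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b))
```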

Logical Neural Networks

https://skirmilitor.medium.com/logical-neural-networks-31498d1aa9be

Our first main idea is the use of constrained optimization during learning to guarantee logical behavior of all neurons in an LNN.

Due to their 1-to-1 correspondence with systems of logical formulae, an LNN can also be viewed precisely as a collection of logical statements in real-valued logic (where truth values are not restricted to just 0 and 1 but can lie anywhere in between). In a separate, very fundamental work [11] we have shown that the larger class of logics to which LNN belongs is capable of sound and complete (i.e. correct and thorough) reasoning, to the same degree as has been shown for classical logic [ref: real-valued logic foundations blog].

Thus, like the famous ‘wave-particle duality’ of physics, LNNs can simultaneously be seen entirely as neural nets and entirely as sets of logical statements, and thus able to leverage the capabilities of both the statistical AI and symbolic AI worlds.

Before we can discuss LNNs further, it is necessary to develop an understanding of the computations their neurons perform. The output of a logical neuron is computed much the same as it is for any neuron by applying some nonlinear activation function f : ℝ → [0, 1] to a linear combination of its inputs w · x − θ for an input vector x, weight vector w, and activation threshold θ, as shown in Figure 2. Different from other neural nets, however, is that LNN’s neural parameters are constrained such that neurons behave according to their corresponding logical gates’ truth functions.

This reference describes an effort to implement human logic and reasoning via so-called Logical Neural Networks. When humans interact with digital computers we can decode the explicit program code or the digital circuits' logic functions. If LNNs begin to perform functional blocks of logic or reasoning in a limited domain, then we have a black box that implements a function of logic or reason.
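
To ground the quoted description, here is a minimal sketch of a single "logical AND neuron", with parameters chosen for illustration (not taken from the article): a standard weighted sum w · x − θ passed through an activation f clamped to [0, 1], constrained so that on classical inputs 0 and 1 it reproduces the AND truth table while still accepting intermediate truth values.

```python
# One "logical neuron" (illustrative parameters, not from the article):
# output = f(w . x - theta), with f : R -> [0, 1].

def f(z):  # clamped-linear activation into [0, 1]
    return min(1.0, max(0.0, z))

def and_neuron(x1, x2, w=(1.0, 1.0), theta=1.0):
    return f(w[0] * x1 + w[1] * x2 - theta)

print(and_neuron(1, 1))      # 1.0 : true AND true
print(and_neuron(1, 0))      # 0.0 : true AND false
print(and_neuron(0.8, 0.7))  # 0.5 : partially true inputs give a
                             #       partially true output
```

With this particular choice of weights and threshold the neuron coincides with the Łukasiewicz conjunction max(0, x1 + x2 − 1), which is presumably the kind of constraint that lets such a network be read simultaneously as a neural net and as real-valued logic.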

Complexity

There are problems that define system states and deterministic changes of state, but which are sensitive both to initial conditions and to the unavoidable computation error introduced by rounding and truncation. In such cases, when we run the automated computer algorithm to solve for the system's path of evolution, even nominally identical initial conditions can yield different paths from one run, machine, or precision setting to the next. This gives rise to the concepts of chaos and complexity theory within computational theory.
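
A hedged sketch of that sensitivity, using the classic logistic map as the deterministic system and single- versus double-precision arithmetic as the two rounding regimes:

```python
# Same deterministic rule, same nominal initial condition, two
# rounding regimes: after a few dozen steps the trajectories differ.
import numpy as np

def logistic(x0, steps, dtype):
    x, r, one = dtype(x0), dtype(4.0), dtype(1.0)
    for _ in range(steps):
        x = r * x * (one - x)  # x_{n+1} = 4 * x_n * (1 - x_n)
    return float(x)

print(logistic(0.2, 60, np.float64))
print(logistic(0.2, 60, np.float32))  # diverges from the line above
```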

SystemTheory
  • 614
  • 3
  • 6
0

Monkey-Man tears off an apple by his own strength.

Man-Monkey takes a stick to tear off an apple by his own strength.

Man creates Logic.

Man takes a stick to force another human to tear off an apple by that human's own strength.

Man calls this stick the Boomstick.

Man creates logical rules to force the robot-computer to take a stick and tear off an apple.

But the robot-computer is too expensive, so Man steps back.

Man creates logical rules to manage a computer that forces a human, by logical rules (which the human accepts), to take a stick and tear off an apple by his own strength.

Does the computer understand Man's Logic?

No, but it uses its rules, the same as the human does.

What will Man do if the logical rules stop working on humans?

Oh, Man still has the Boomstick.

L = logic...

No one has cancelled "a bit of the old ultra-violence".

  • 1
    "*The guns spell money's ultimate reason*" - poem by Stephen Spender – Scott Rowe May 17 '23 at 10:41
  • @ScottRowe Paper and digital money are, at root, debt obligations of the gun's owner. Money or bullets - what is your free choice? – άνθρωπος May 17 '23 at 10:48
  • 1
    It seems that the one gives rise to the other. – Scott Rowe May 17 '23 at 11:29
  • @ScottRowe There are two origins of money. The first is "euphemization": replacing a direct physical threat with a forced loan. The second is a trick: I give you seashells, you give me gold. But in the end the seashells can be exchanged back only for bullets, not for gold. So: violence, then the two paths of money, and violence again at the end – άνθρωπος May 17 '23 at 11:43
  • "*Why can't we all just get along?*" – Scott Rowe May 17 '23 at 16:46
  • @ScottRowe Nuclear war will make peace. I think God thought something similar when he invented the Great Flood, but he was a bad scientist and didn't know about atomic decay, etc... – άνθρωπος May 17 '23 at 22:51
  • The cockroaches will be happy to take over the earth after we are gone. – Scott Rowe May 17 '23 at 22:53
  • @ScottRowe The genus that follows us should be better than we are: all with a similar brown color and a stylish mustache... and some able to fly! Oh, they are already happy – άνθρωπος May 17 '23 at 22:58