Today's electronic digital computers are often referred to as universal Turing machines. That is, the concept of the UTM is used to understand today's stored-program electronic digital computers. But is this concept adequate? In fact, can today's computers do things that UTMs can't? If they can do more than a UTM can, it could be important to AI, since AI (and Searle, in his Chinese room argument) uses the UTM concept to define the abilities of AI's research and development platform, the electronic digital computer.
-
A Turing machine is an abstraction, while electronic computers are physically implemented, so they can perform physical actions and affect physical things like keyboards and monitors; they can visualize their outputs, while a Turing machine cannot; they can do equivalent tasks much faster; etc. However, *theoretically*, any program a computer can execute can *in principle* be executed by a (physical realization of a) Turing machine. – Conifold Feb 06 '18 at 00:58
-
@Roddus As a mathy abstraction a Turing machine has infinite tape, so your question should probably be the other way round :) – ngn Feb 06 '18 at 03:53
-
Today's computers, no. However, there are problems that have answers but that today's computers can't solve. It's theoretically possible someone will create a "magic" computer (one that looks very different from a Turing machine) that can answer these questions. – Feb 08 '18 at 18:51
-
Create heat. Take up space. Break down. Run out of memory. – Feb 09 '18 at 02:40
-
@jobermark Running out of memory I'd agree with. But a TM would be really easy to make (more tape being added whenever needed), and the motor would generate heat, the machine would take up space,... I was thinking more about random-access memory and actually receiving input. And anything else? – Roddus Feb 10 '18 at 10:29
-
@Conifold It's always seemed strange that Turing machines (as per Turing's 1936 paper) have no real-time input. The one that prints 0 1 0 1... has no input at all. The Universal machine has its S.D. (program) loaded and is input in this sense, but once the machine starts running the program, there is no input. (If there were, symbols would have to appear in empty squares as if by magic). I just wonder if there is something important about this lack of the possibility of real-time input in a 1936 Turing machine. – Roddus Feb 10 '18 at 19:41
-
@ngn My interest in whether computers are Turing machines comes from Searle's Chinese room argument. He understands the electronic machine with the concepts of the universal Turing machine. But is the electronic digital computer a UTM? It's not (as I understand it) in the sense that UTMs have no random access memory (only Left, Right, one square at a time) and have no real-time input. But is there a program that will run on the electronic machine but which no Turing machine could run (ignoring RAM and a UTM's lack of input)? – Roddus Feb 10 '18 at 19:48
-
@ngn cont. What about (in BASIC) A$=B$, in other words, copy the contents of variable B$ and store the copy in variable A$? The program has no idea what B$ contains. Can a TM perform a copy operation without identifying the symbols to be copied? Or in C: a=*b, take what is stored at b and assign it to a. Or in assembler: mov EAX, EBX, copy the contents of register EBX to register EAX. In all these programs the thing to be manipulated is not identified by the program. Can a Turing machine manipulate a symbol without identifying (scanning) it? – Roddus Feb 10 '18 at 20:09
-
@Roddus a TM [can](https://cs.stackexchange.com/questions/22418/what-is-the-difference-between-ram-and-tm) simulate random access, only not as efficiently. I don't understand what you mean by "without identifying the symbols" in your last comment. Obviously, a TM must know at least the source/destination offsets on its tape in order to do copying. – ngn Feb 10 '18 at 21:25
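To make that simulation concrete, here is a minimal Python sketch of random access implemented on a purely sequential tape, along the lines of the linked answer. All function names are illustrative inventions, not from any library: memory is laid out as (address, value) cells, and every "random" access is a left-to-right scan.

```python
# A minimal sketch of simulating random-access memory on a sequential
# tape.  Memory is a flat list of (address, value) cells; every
# "random" access is a left-to-right scan, which is slower than real
# RAM but computes exactly the same things.

def tape_load(tape, address):
    """Scan cell by cell until the address matches."""
    for cell_address, value in tape:
        if cell_address == address:
            return value
    raise KeyError(f"address {address} not found on tape")

def tape_store(tape, address, value):
    """Scan for the address; overwrite in place or append a new cell."""
    for i, (cell_address, _) in enumerate(tape):
        if cell_address == address:
            tape[i] = (address, value)
            return
    tape.append((address, value))

tape = []
tape_store(tape, 16, "contents of B$")
tape_store(tape, 32, "other data")
print(tape_load(tape, 16))   # -> "contents of B$", found by scanning
```

Each access costs a scan over the tape rather than a single step, which is the efficiency loss ngn mentions; nothing new becomes computable.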
-
Whether the input is in real time or pre-typed on the input tape is not relevant to the nature of computation. Issues with formalizing inputs and outputs for TM were discussed on [Math SE: Input and output of a Turing machine](https://math.stackexchange.com/q/952547/152568). – Conifold Feb 11 '18 at 23:00
-
@Conifold That's interesting about the nature of computation. I'll look at the reference. Why is the source of the symbols not relevant to the nature of computation? (Intuitively the source seems irrelevant, but can the intuition be explicated?) I'm thinking of symbols received by the TM from an external system, not symbols printed on the tape by the TM itself (which could be considered input to the executing algorithm, but the source of this "input" is the TM itself). – Roddus Feb 12 '18 at 00:38
-
Computation is by definition what transforms the input into the output; where they come from and where they go is abstracted away. You can assume that "the world" already scribed all the symbols on the tape that is fed to the TM. For that matter, you can imagine that said tape is fed to it "in real time", but that makes no difference to what it does with it. – Conifold Feb 12 '18 at 00:47
-
@Conifold So when a meat grinder grinds meat, does that mean the meat grinder performs a computation on the meat? What I'm trying to get at is: can a computer process tokens that arrive from sensors without performing computations on them? Is whatever a computer does to the input defined as "computation"? Or is "computation" defined in some other way, without reference to what computers do? If so, the question "Do computers necessarily compute?" is empirical. And maybe they can process what they process without computing on what they process (and this has implications for the CRA). – Roddus Feb 12 '18 at 22:21
-
@ngn By "without identifying the symbols" I mean can a TM do stuff to symbols - process them, manipulate them - without identifying them. I'm thinking about a computer program that can do this, e.g., var2 = var1. This copies whatever is in the variable (whatever is stored at the location named) var1 and writes it to var2. Can a TM do this sort of "blind" copy? Or does it have to identify the symbols in var1 then execute conditionals where the Scan operation is successful. E.g., if "1" is in var1 then does the TM have to execute the Scan conditionals: if "0" then ...; if "1" then ...? – Roddus Feb 12 '18 at 22:34
-
@Roddus When you say "blind", are you trying to describe something like [fully homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption#Fully_homomorphic_encryption)? – ngn Feb 13 '18 at 00:08
-
Computation is processing of symbols, not matter, so I am also not sure what your analogy is getting at. I am not sure what "tokens that arrive from sensors" are, but what difference of principle does it make whether we consider them to be the input, or add an extra layer of computation and view them as processed? Computation is defined as transformation of inputs into outputs according to transcription rules. Those are approximately implemented in physical computers; everything else that they do is irrelevant in this abstraction. So your empirical questions are not about computation. – Conifold Feb 13 '18 at 01:17
-
@Roddus Turing machines are not physical objects. Period. If you built a physical model of one, it would not be one, any more than a machine that generates permutations *is* a group. As such they do not have a computation speed, and so linear and random-access memory are equivalent. – Feb 13 '18 at 19:08
-
@ngn something like blind signing a homomorphic encryption. – Roddus Feb 14 '18 at 07:36
-
@Conifold the tokens that arrive from sensors are the objects digital sensors emit and which travel to the computer ('symbols'). Thanks for a definition of computation. I'll think about it. I'm trying to look at the physical machine as a causal system, and the concept of the symbol brings with it a lot of linguistic baggage (that Searle makes great use of). But if a computer can manipulate an input 'symbol' without transforming it into an output symbol according to a transcription rule, is the computer doing something other than computing on the symbol? – Roddus Feb 14 '18 at 07:58
-
@jobermark I understand that the concept of the TM is in effect the definition of machine computation. I'm trying to look at the electronic computer as a causal system and ask, well, can it perform sorts of processing on symbols that TMs can't? I mean is there a fundamental difference? Or does the UTM fully explain the abilities, the possibilities, of the electronic device? – Roddus Feb 14 '18 at 08:07
-
@Roddus both TMs and computers can do fully homomorphic encryption – ngn Feb 14 '18 at 09:54
-
@Roddus We have a formal proof that bit-manipulation and UTMs are fully equivalent except for speed. Regarding speed, the primary difference would be the availability of randomness. Modern machines do not use it, but quantum machines would leverage it in spades. We also have NTM=DTM, so in principle a nondeterministic machine can be no more powerful than a deterministic one, but the combination of parallel processing and true randomness can theoretically be much faster. So 'can do' is still not adequately defined. – Feb 14 '18 at 17:01
-
This is the narrow meaning of "computation" in this theory. If we are using an analog computer then it can employ a black box causal process for the transformation. As long as we are empirically certain that the result accords with our specifications we do not need to worry as to what is inside the box. But then it is not susceptible to theoretical analysis as a computation either. – Conifold Feb 14 '18 at 20:46
-
@ngn just taking the idea of blind signing a ciphertext, suppose a machine at a post office reads the addresses on the outside of envelopes that arrive through the post, then shoots the envelopes into different bins according to some element of the address, say country. That machine presumably computes on the addresses, but does it compute on the symbols sealed inside the envelopes? Presumably not. How is this different from the following variable assignment: a$ = b$? Does the program compute on the contents of b$? – Roddus Feb 16 '18 at 07:24
-
@jobermark "bit-manipulation and UTMs are fully equivalent". OK. Could you give a link to the formal proof? – Roddus Feb 16 '18 at 09:35
-
@Conifold so "empirically certain " means an inductive inference? And the specifications are Skinnerian stimulus-response (input-output) type specifications? Computation = " transformation of inputs into outputs according to transcription rules". That's helpful. So for the analogue black-box computer the rules are unknown, and inductive inference implies invariant rules and hence invariant input-output pairing. – Roddus Feb 16 '18 at 09:46
-
Here is a basic example: https://drona.csa.iisc.ernet.in/~deepakd/atc-2008/murec.pdf. (But, all told, looking up basic coursework for undergrad math classes is something you should just do for yourself.) Given that all bit-manipulation instructions are recursive functions, this applies to the whole range of digital machines. Analog machines fall under a different rubric, and to the extent they logically process real numbers rather than integers and allow for real randomness, they are not covered by this result. – Feb 16 '18 at 15:36
-
@Roddus FHE is more than the mail sorter you describe. It can compute on the content of the envelopes without opening them. – ngn Feb 16 '18 at 21:43
-
More or less, we could of course have a physical theory for what goes on inside the black box but it may be computationally (in the abstract sense) intractable to employ it. Neural nets are often used as black boxes even though in principle they could be analyzed computationally. – Conifold Feb 18 '18 at 21:21
-
This discussion of reference and identification falls apart when you realize that most languages are going to assign $a=$b by letting the two symbols point to the same address. The symbols a and b and the address have been 'identified' but there is no concept of their contents at all. The question is so ungrounded that it is meaningless to ask whether and how 'the contents' are 'identified' or not. – Feb 19 '18 at 20:22
-
We use UTMs as a model to remove all of this ambiguous talk, since the same things can be accomplished so many different ways that we cannot know exactly what we mean when we talk about something as simple as 'assignment' -- Does it move the contents? Does it change pointers? Does it just clue the compiler that two things are the same, without any real effect at execution time? Won't the language just choose that by context? Sticking to the most constrained and least powerful model keeps us from saying pointless nonsense. – Feb 19 '18 at 20:25
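To make the ambiguity concrete in one language (Python here; other languages choose differently), here is a minimal sketch of the "change pointers" reading of assignment versus an actual copy:

```python
# Assignment semantics vary by language; in Python, `a = b` copies no
# contents at all -- it makes the name `a` point at the object `b`
# already names.  An explicit copy is a different operation.
import copy

b = ["some", "contents"]
a = b                      # rebinds a name: no contents are touched
print(a is b)              # True  -- same object, same "address"

c = copy.copy(b)           # shallow copy: contents are actually read
print(c == b, c is b)      # True False -- equal contents, new object

b.append("more")
print(a)                   # ['some', 'contents', 'more'] -- a sees it
print(c)                   # ['some', 'contents'] -- c does not
```

Whether "the contents are identified" thus depends on which of these operations the language's semantics picks, which is exactly the ambiguity described above.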
-
@jobermark Thanks. So the idea of assignment abstracts away from causal specifics. Or assignment can be realized in different causations. I suppose some would say that that's the main value of abstraction (and the language will indicate the specifics by its context). And Turing machines (UTMs) are the most useful? ultimate? most accurate? abstraction of the stored program electronic digital computer? But it still seems possible that UTMs don't reflect (or whatever the right word is) everything computers can do. There seems to be an independence between the UTM and the electronic device. – Roddus Feb 24 '18 at 08:30
-
@Roddus only to the extent that device has chaotic or analogue components. To the degree a digital computer is **digital** it is isomorphic to a UTM. Electronics that is not digital is not generally considered a computer in modern parlance. There is a certain degree of refusing to understand what you don't want to admit going on here. – Feb 26 '18 at 16:29
-
@jobermark This is a basic question I'm interested in. The concept of the UTM is used to understand the actual particular electronic devices, computers. Could other quite different concepts be used to understand these same devices? Is the UTM the only idea we have with which to understand the machine? What about the concept of association of particulars? Could the machine be usefully understood as a device which creates instances of associative relationships? – Roddus Mar 02 '18 at 00:07
-
It could, and if those relationships are between discrete entities, the result would be provably isomorphic to a UTM. So why not look at a UTM as a device which creates instances of associative relationships? *Oh, right. People already have*, through relational and universal algebras. The paper I cited gives five very unrelated ways of looking at these phenomena, all isomorphic to UTMs, but having different strengths. Did you even follow the link? Do you care to listen, or should we all just stop talking? You seem bent on just not hearing. – Mar 02 '18 at 00:51
-
So no, there is not one way to look at computers: recursive functions, lambda calculus, general deductive systems, universal algebra, the theory of relational forms, the category of cartesian closed categories, the abstract semantics of open types, the topology of marked discrete tilings of a plane, etc., are all ways of looking at these phenomena. And they are all isomorphic to UTMs. But each one of them lets us look at these things from a different vantage point and makes different problems harder or easier to depict. – Mar 02 '18 at 01:04
-
The advantage of the UTM is simply that it is easy to specify and has very restrictive rules. So for issues of absolute possibility and impossibility, it is the earmark example (Turing was focussed on proving an impossible problem impossible). For issues of comparative performance, people usually fall back on recursive functions (Kleene was concerned with constructive and applied math). For issues related to abstract descriptions of behavior and relative complexity of expression, they often fall back on relational algebra concepts like combinators.... – Mar 02 '18 at 01:08
1 Answer
The short answer is no; modern computers cannot do things that Turing machines can't do. What they can do is run very sophisticated, complex Turing machines that simulate things Turing machines would seem unable to do.
This is an important point; Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic Algorithms, and all the other types of 'machine learning' and 'artificial intelligence' mechanisms that computers can execute are still deterministic and algorithmic in nature. They still run a strict series of commands and, most importantly, do not deviate from those commands no matter what. They're good at solving NP (nondeterministic polynomial time) problems only insofar as they emulate the process that a human would undertake, but that emulation isn't perfect and may never be. Even the 'random' elements in a Genetic Algorithm aren't truly random; you seed a pseudorandom number generator with a starting value, and every call for a random number draws the next element of an ordered (but seemingly random) sequence determined entirely by that seed.
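The determinism of these 'random' elements is easy to demonstrate. A minimal sketch using Python's standard random module (a pseudorandom generator):

```python
# Pseudorandomness is deterministic: the same seed replays the exact
# same "random" sequence, which is why a seeded genetic algorithm is
# reproducible run after run.
import random

random.seed(42)
first_run = [random.random() for _ in range(5)]

random.seed(42)
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)   # True: identical sequence every time
```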
In that sense, computers can accumulate new data and can use that data to drive their behaviour, but only according to the rules of a Turing machine. The computer cannot extend itself beyond its programming, even though it may seem to emulate that capacity by the way it uses the data it accumulates.
There is some debate going on now as to whether or not a quantum computer will break that model, but if it does, then it's not a 'classical computer', and we will have only used the name 'computer' for it to extend a metaphor that we already understand. That said, whether or not a quantum computer can operate in a non-deterministic way will be a very interesting question to try to answer in due course.
-
Tim B II, you say "...computers can accumulate new data and can use that data to drive their behaviour, but only according to the rules of a Turing machine." But what if, instead of "drive their behaviour", we say *determine* their behaviour? New data can determine behaviour - but doesn't that contradict Turing's theory of the Turing machine? A UTM runs a description of a TM. This description (program) defines the input-output behaviour of the UTM, whatever that behaviour might be, present, past and future. But can't new data change that definition of input-output behaviour? – Roddus Feb 06 '18 at 01:02
-
Hi Roddus. Your explanation is exactly why we can't say *determine* their behaviour. Even Neural Networks are Finite State Machines, and how different values trigger movement between those states is defined in advance. The sheer complexity of the code behind a Neural Network can make it appear that new data can change the programmed output, but the core program doesn't change, and if you run the same data through it with the same random seed, you'll get the EXACT same outputs, every time, because the process is still deterministic. – Tim B II Feb 06 '18 at 01:20
-
Hi Tim B II, I'm not sure I completely understand your comment, but how about this reply: the machine has a simple program. All it does is take the first two input symbols to arrive and store them in locations 1 and 2. All further input symbols are compared to the one at loc. 1. In the event of a match, the machine emits as output a copy of the symbol in loc. 2. Isn't the machine's behaviour dependent on the input data, not the program (or not just the program)? Running a new instance of the program will almost certainly result in different machine behaviour (the input stream is variable). – Roddus Feb 06 '18 at 01:35
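For concreteness, here is one reading of that simple program as a minimal Python sketch (the function name is invented): the program text is fixed, yet the output stream depends entirely on the input stream.

```python
# One reading of the machine described above: store the first two
# input symbols, then echo the second stored symbol whenever a later
# input matches the first.  The program never changes; only the data do.

def roddus_machine(input_stream):
    stream = iter(input_stream)
    loc1 = next(stream)          # first input symbol -> location 1
    loc2 = next(stream)          # second input symbol -> location 2
    for symbol in stream:
        if symbol == loc1:       # compare to location 1
            yield loc2           # emit a copy of location 2

print(list(roddus_machine("abxaya")))   # -> ['b', 'b']
```

Two sessions with different input streams produce different behaviour from the same unchanged program, which is the situation the following comments debate.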
-
I understand what you're saying, but the issue I have with it is that programming takes into account variables that have different values at different states, including input. The outputs will change according to the inputs, yes, and even the steps that the program takes are different, but the *scope* of the program has not changed. That you change the inputs only means that different parts of the same program run at different times, not that the application has changed *how* it would handle either input. – Tim B II Feb 06 '18 at 01:41
-
But the program doesn't change. And the whole small program runs. It doesn't know (or care) what is stored in loc. 1 or 2. But the behaviour of the machine does change (is different from session to session). Hence the machine's behaviour can't be explained by the program. Something else has to explain it. The something else is the content of the input stream (and by extension, if it came from a sensor, then the something else is the sensed environment). So the machine has learned from its experience. So does this mean that the idea of the Turing machine can't explain learning from experience? – Roddus Feb 06 '18 at 02:16
-
No, actually the machine hasn't learned from experience; it's reacted in a predetermined way to changes in the input data. That the program is at least in part responsible for the change in the input state is irrelevant. The program would ALWAYS do the same thing to the same inputs, regardless of how those inputs were created. This is why hacking is possible; computers don't actually *know* anything about the data; it's an input to a set of instructions, and while we may observe what looks like data-dependent behaviour, it's merely value-driven conditional processing. – Tim B II Feb 06 '18 at 02:25
-
Turing in his 1950 paper raised the issue of learning, saying that learning is a case of the rules of operation of the machine changing, and asking how this was possible since the rules of operation (program) are fixed from the start no matter what the machine's history might be. (He then posited "ephemerally valid" rules to explain learning - whatever those might be.) My small program doesn't change and treats all input the same way. Is this what you mean? That learning is a change in rules, and if there is no change then (by definition of "learning") learning doesn't happen? – Roddus Feb 06 '18 at 02:35
-
Yep, that's exactly what I'm saying. When I read Turing's paper, I'm accumulating knowledge. When I integrate that knowledge into my understanding of algorithms and how computers work to change the way I do or think something, that's learning. Another way of putting that is that a HDD full of data hasn't 'learnt' anything more than a blank HDD because neither HDD can change its behaviour as a result of the data contained within. Neither can a computer program running against either HDD, it just does what it's programmed to do against a defined input. – Tim B II Feb 06 '18 at 02:43
-
On your point that the program would ALWAYS do the same thing to the same inputs: that's right. Presumably the neural processes (programs) that process neural inputs always do the same things to the same inputs, too. But the point is that depending on the inputs, the MACHINE does different things. The program never learns. It's the *machine* that learns. In other words, the machine brain is more than hardware plus software. There's something else there. And learning is embodied in this extra thing. (I made this comment just before your last one.) – Roddus Feb 06 '18 at 02:45
-
While I agree with your synopsis, it's a can of worms I've opened many times on this site. Mathematically, it's possible that our ability to learn and make our own choices is all an illusion. Empirically, I agree with you; something more is happening. Regardless, the machine we call a computer cannot extend itself past its programming although we can emulate that phenomenon pretty convincingly. The question then becomes whether or not we're merely emulating that same phenomenon so convincingly that we've even fooled ourselves, but that's WELL beyond the scope of your question. :) – Tim B II Feb 06 '18 at 02:54
-
And possibly my understanding. I've spent the last 18 years including 10 in post-grad research trying (foolishly?) to explain what the extra thing is. It's been hard getting out of the computational mode of thinking about the problem. Hence the question, Can computers do things TMs can't (TMs being probably the accepted definition of machine computation). Just off-hand, I'm not sure Turing machines can run the simple program described above. They have to scan ("identify") a symbol in order to manipulate it, but the small program doesn't, in part, seem to do this. – Roddus Feb 06 '18 at 03:12
-
Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/72747/discussion-between-tim-b-ii-and-roddus). – Tim B II Feb 06 '18 at 03:16