6

If an artificial intelligence system existed in a robot and was able to constantly reprogram and reconfigure itself in any way, without disrupting its necessary functions and without depending even partly on any external data (including any programmers), then such a system could set up its own goals; arguably this is describable as an AI 'choosing' to reconfigure itself so it can be in a better 'position' to achieve its self-'established' goals. At least it might, to outside observers, seem as though such a system possesses the ability to selectively vary its behavior. So could an A.I. system be able to actively 'reprogram' itself and change itself in ways only 'it' could give specific, definite 'justifications' for?

In using the term free will here, I mean that the A.I. system has the 'ability' to change one or more of its 'internal states', relative to the information content and 'information packages' it has 'access' to, in ways that are NOT predetermined by 'previous programming' or by information 'set up' by any 'programmers' or other 'outside' sources of information or information management. The A.I. system is able to change 'itself', information-wise, without ANY 'outside' help or interference. This would be 'like' having the 'ability' to 'freely alter' itself with regard to what information to analyse and manipulate next.

Alternatively, instead of using the term free will, with all its philosophical baggage, one could call this 'non-externally-interfered-with' behavior self-variation, or 'non-outside-influenced' self-variation, or auto-cybernetic selective variation. Of course, any system that can 'vary' itself in some way must be 'doing' some of the important variations 'itself', without any 'externally sourced' instructions; otherwise the system would have to be 'told' how to make every type of system change, and all its important functioning would be run 'by remote'.

201044
  • It would have free will only in the sense that we too have free will. That is, the [illusion of free will](http://philosophy.stackexchange.com/questions/849/to-what-extent-do-we-choose-our-beliefs), if you are a hard determinist. – stoicfury Dec 20 '14 at 04:42
  • I would consider the final question to be sufficiently specific for reopening (but I lack the rep to do so myself). I might suggest changing the word from "justify" to "explain," as I believe the alternate hypothesis you are trying to disprove is that all AI actions can be explained using laws of physics and logic. Such an alternate hypothesis would refuse to admit AI free will. Disproving it would open the door to the plausibility of AI free will. – Cort Ammon Jan 01 '15 at 21:40
  • I've added some inline comments to try to indicate some places to explore further as you revise. In bigger-picture terms it can help to narrow the question as much as possible and specify as much context and motivation as you're able. In particular it would help to share a little about what you're reading that has made this an important problem for you in your study of philosophy; it can also help to indicate what hypotheses you may have formed, and what your research has turned up already. – Joseph Weissman Jan 05 '15 at 17:28
  • I've added the clarifying comments to the post and nominated it for reopening, since I think it is clear enough, especially considering the broad range of questions we accept now in terms of overall specificity. I don't have any issues with the points raised in the supertext except maybe the 3rd one (which interpretations?), but it is minor at best. Novice philosophers can't be expected to know "multiple interpretations" of free will, and I think people will be able to add relevant and useful answers which can help this person figure out his/her conundrum. – stoicfury Jan 05 '15 at 17:36
  • That's good with me – Joseph Weissman Jan 05 '15 at 18:20
  • What is free will anyway? If some type of dynamic, self-controlling, self-sustaining system had the ability to 'change itself' in some way without being 'forced' to do so by some 'behavioral algorithm' and/or some 'already working' 'external force' of some kind, you could call this a 'free variation' of its own 'behavior'. Something like a 'free-willful' action. – 201044 Mar 26 '15 at 01:20
  • I personally think you already answered your question here: **So could an A.I. system be able to actively 'reprogram' itself and change itself in ways only 'it' could give specific definite 'justifications' for?** If such an A.I. **reprograms** itself, then personally I think you don't even get to a **free will** question, because the A.I. stands by itself without any influence from the outside, which equals complete **free will**. –  May 30 '15 at 16:57
  • What on Earth do you mean by **quoting the word 'itself'**. Is it itself or is it not. – Cheers and hth. - Alf May 31 '15 at 05:55
  • I am sorry, **whom are you asking????**. –  May 31 '15 at 06:32
  • @KentaroTomono: Sorry about the context. I meant to ask [the OP](http://philosophy.stackexchange.com/users/12097/201044). Mea culpa, very sorry. – Cheers and hth. - Alf May 31 '15 at 06:44
  • @ cheers and hth; I mention 'itself' as it is a self-contained, self-sustaining info. management system. Some writers seem to take issue with even using the term 'self' to do with the 'mind'. – 201044 May 31 '15 at 07:22
  • Ignoring all the philosophical quandaries associated with the concept of free will: can an A.I. system just CHANGE itself in a constructive way without this change being initiated by 'outside' influences? If it can't, then Artificial Intelligence is impossible.... – 201044 Jun 01 '15 at 14:13
  • Is this true? ( what I said on Jun 1 at 14:13) – 201044 Jun 24 '15 at 03:39

6 Answers

5

Your question raises interesting ideas in the areas of free will as well as the nature of consciousness. We might begin by assuming that humans have free will, as many philosophers do. It seems, after all, that we are able to make choices, to "vary ourselves in some way without any externally sourced instructions". You seem to recognize that free will in a deterministic universe is a complicated issue, so I won't belabor the point here other than to point out some other good questions we have on that topic:

What are the necessary conditions for an action to be regarded as a free choice?

Is free will reconcilable with a purely physical world?

What is the difference between free-will and randomness and or non-determinism?

The problem I want to highlight to you (which may be why you asked your question in the first place) is: Can we reconcile the notion of freedom while knowing that a robot we programmed — despite making choices we may not have known it would make — is acting in a way we intended (and perhaps even predicted) it to? If you program a robot to reprogram itself to make itself smarter and give itself its own goals, and it does that very thing, are those "choices" it made, or is it just following a set of rules/guidelines we defined? How many successful code-rewrites must occur before we say the robot is self-generating these actions? Can we ever say that?

The issue you run into here is no different from the issue of free will in humans. While our brains are made of mushier materials, we do seem to exist in a very physical and determined universe. Our actions seem heavily influenced by our physical brain (if you damage it, we start acting differently, and altering the brain in more targeted ways with drugs also seems to have a profound effect). The influence which is perhaps most important to discuss here, though (outside of direct external physical brain intervention), is the influence of the past. As we are raised, our brain is shaped not in some vague, ghostly way but physically, in the neurons, and these physical changes (which account for our memories and consistency of self, why we see ourselves as the same self over time and not a new person at every moment) greatly influence how we behave and react to things. It is quite literally our own programming. We like to think we are "programmed" in such a way that allows us to "freely" make choices in our life, but do we?

There are many interpretations of free will (compatibilism, incompatibilism, pessimism, etc.), but if you want to hold a hard determinist view (an incompatibilist view, in contrast to libertarianism) you will find it difficult to reconcile any notion of free will in robots or humans, for the same reasons. In my answer to the first free-will-related question above, you will find my own proposal for a new definition of free will and how I reconcile the concept with human choice, and I leave it to others to explain how they do it under other views (as compatibilists, for example).


Regarding your comment:

Could a functioning A.I. system that can reprogram 'itself' (without causing any 'internal dysfunction') 'reconfigure' various programs and information it is manipulating to 'come up with' a set of programs and/or info. that actually contradicts certain ideas held by its programmers, in such a way that would prove the system was NOT acting even indirectly according to how the programmers initially programmed it? In other words, can an A.I. system be 'programmed' and, when 'used', 'tell' the programmers some of their fundamental ideas are wrong?

Can you? Can you act in a manner which does not accord with how you were raised and are genetically/biologically programmed? My gut tells me that nothing is random. A system can act in unpredictable ways, but those ways are not random; they are very much caused, very much determined by the previous inputs. So, sure, a system can act in a manner which contradicts its programmers, but only in that the programmers themselves had a poor notion of what their code would entail. Just as humans can seemingly act "out of character" too, sometimes. The programmers thought that their algorithm would result in behavior somewhat like X, but it turned out like Y. The error is with the programmers; they predicted wrong. The system did not act on its inputs in a way that was random or inherently unpredictable, just perhaps difficult to predict (obviously so for the programmers, if they predicted something else).
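The "unpredictable yet fully caused" distinction has a standard concrete illustration from chaos theory, the logistic map (an editorial example, not something from the original answer): every value follows deterministically from the last, yet nearby starting points diverge so quickly that long-run prediction is practically impossible without running the system itself.

```python
# The logistic map x -> r*x*(1-x) is fully deterministic: identical
# inputs always yield identical trajectories. For r = 4 it is chaotic,
# so a tiny change in the input typically produces a very different
# outcome; the behavior is hard to predict, but never uncaused.

def trajectory(x0, r=4.0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(trajectory(0.2) == trajectory(0.2))      # True: same cause, same effect
print(trajectory(0.2), trajectory(0.2000001))  # typically far apart
```

The programmers of such a system would "predict wrong" not because anything random happened, but because prediction here is as expensive as just letting the system run.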

stoicfury
  • Could a computer system answer a problem in a way that shows its programmers were wrong about some of their assumptions regarding the problem? Could a computer system prove a theorem about computer algorithms that many computer programmers admit they would never have thought of? – 201044 Jan 08 '15 at 05:50
  • I would answer both your questions in the same way I answered the first one: Yes, it could (and they do) if it was programmed to. Sometimes it happens because we program things and the unexpected happens. It acts in a way we did not intend, but is beneficial nonetheless. Other times we intend for such things to happen; we program a computer to learn the rules of a system and come up with more robust ways of describing that system (machine learning) than humans ever could in the same amount of time. We do this stuff at work all the time. – stoicfury Jan 08 '15 at 08:05
  • So if an A.I. system solves a problem in a way that shows its programmers were wrong about certain fundamental ideas of computer science, including how it was initially programmed, that would indicate its self-manipulating capabilities are now different from any capabilities it had initially. So any info. self-manipulation it 'does' now would be according to different parameters than were initially set up. It could solve the problem without relying on its original programmers' 'input'. – 201044 Jan 08 '15 at 22:36
  • An A.I. system could conceivably solve a problem in such a way all its programmers previously disagreed with or even contradict some of the ideas of the programmers. Even if the system changes in unpredictable ways or unexpected yet predictable ways the system could manipulate its own info. to solve a problem using methods where the methods themselves contradict the programmers methods or couldn't actually occur given the methods the programmers used. – 201044 Jan 08 '15 at 22:46
  • Could an A.I. system be used to write software for other A.I. systems that are similar ( but not identical) to itself and in such a way that indirectly solve some important algorithm problems. What if it solves some of these problems in ways that show 'its' own programmers were wrong about how they 'solved' these problems? – 201044 Jan 08 '15 at 22:55
  • You are tripping up over the same issue over and over. It all boils down simply to the fact that *everything has a cause*. Nothing is *uncaused*. Could X do Y? Yes, if it was caused to do that, whether by programming accident or whatever. Things do not happen "out of the blue". What we ***INTEND*** to happen may be different from what *actually* happens, but that doesn't mean what actually happened did so because of some lapse in physics (causality). We just predicted wrong. – stoicfury Jan 09 '15 at 02:00
  • I know that according to science and physics etc., nothing is uncaused. I'm not negating that; I'm suggesting the mental events one 'causes', however they manifest themselves, can only truly be predicted, if at all, by the person who is aware of their own 'mental events' (however they are able to do that) and possibly predicting where their 'changing mental events' might 'lead' them. Someone from 'outside' the person and their points of view cannot have enough info. to determine 'where' the person's 'thoughts' are 'leading'. P.S.: Can I help it if one particular user keeps challenging my point? – 201044 Jan 09 '15 at 03:06
  • Sure, 'mental events' are causes in the sense they are simply one component of a unbroken chain of prior occurrences. Did the kid go to school today? Yes, be**cause** his parents told him to. Why did his parents tell him to? Be**cause** their parents raised them to believe that going to school is good for children. Why did their parents teach them that? Be**cause** they grew up in a society which valued such things.... *ad infinitum*. You can define whichever point you want as the "cause" for the kid going to school, it does not matter. Mental events may be causes, they may not be. (part 1/2) – stoicfury Jan 09 '15 at 21:06
  • The point is that none of these things caused themselves. You seem to want some sort of notion of "free will" whereby an internal process (mental event or whatever) acts upon the mind and "poof", the robot has free will because its mind caused the "most recent" action, not the original programming. This is of course a silly notion if you hold that everything is physical and everything has a cause. Any individual event has an infinite number of causes that led up to it, arbitrarily picking one and calling it "that which grants free will" is entirely unfounded. (part 2/2) – stoicfury Jan 09 '15 at 21:11
  • If nothing can 'cause' itself, wouldn't that be a problem for evolutionary theory, as it states the first animate biological systems somehow developed from inanimate systems, and therefore the ways the animate systems have to 'organize' their 'internal subsystems' were 'caused' by 'themselves'? They couldn't be caused by the previous inanimate forms, which are dissociative structures and/or not self-sustaining. – 201044 Mar 20 '15 at 16:46
  • @201044 - No, in fact the idea that "nothing is uncaused" (or as you put it, "nothing can 'cause' itself" which is also true but only a subset of the greater idea) is ***precisely*** what the theory of evolution builds upon. The theory of evolution posits that gradual accumulations of small changes over time lead to the complexity of life we see now, rather than deliberate design. Like a (natural) waterfall -- it wasn't designed by anyone or anything; tectonic forces shaped the land while wind currents, temperature, and pressure differences direct the movement and formation of clouds. (1/2) – stoicfury Mar 21 '15 at 00:09
  • That the water happened to be pulled by gravity over a ledge and form a waterfall, that's just happenstance, but very caused happenstance. Things caused it to happen; it wasn't "uncaused". Same goes for evolution... it happened, there were causes, but not because it was "designed" *per se*. In fact, the idea that everything has a cause is only problematic for the [Abrahamic God](http://en.wikipedia.org/wiki/Cosmological_argument#Objections_and_counterarguments). (2/2) – stoicfury Mar 21 '15 at 00:09
  • Acting in a non-predetermined way is NOT necessarily random behaviour. One could 'determine' what one was going to 'do' next ad hoc, or 'on the spot', using already 'set-up' 'behavioural patterns' one 'chooses' to do next. In other words, one could 'figure out' what one was going to do next 'on the spot' without 'tossing a coin'. – 201044 May 28 '15 at 08:12
  • I'm not sure what you are getting at. – stoicfury May 29 '15 at 08:29
  • Someone suggested an A.I. system or a self sustaining 'robot' or even people act in a pre-determined way or a random way, no other alternative. – 201044 May 30 '15 at 11:44
  • Can any information management system rearrange its own important information content in novel recombinations of possibly 'older' ideas or information? – 201044 May 31 '15 at 09:26
  • Maybe the 'process' of rearranging some important information, in ways that do not compromise the operating systems of an information management system, could be called a 'mental' event within the system, in that it is the changing dynamics of the information that is the 'on-going event', not just the final state. And if this rearranging CAN be done in ways that are not determinable from an outside observer's viewpoint, then this could be termed a 'free variation' of the system's info. content. – 201044 Mar 03 '16 at 04:40
  • When you say “everything has a cause” - I’d think that a decision made out of my free will has a cause - my free will. – gnasher729 Jan 03 '23 at 18:15
  • @gnasher729 What is your "free will"? How does it fit into what we know about the human condition (i.e. that we are physical beings, made of matter, operating under a system of physical laws)? See: https://philosophy.stackexchange.com/questions/966/what-are-the-necessary-conditions-for-an-action-to-be-regarded-as-a-free-choice – stoicfury Jan 05 '23 at 05:45
3

I wonder what the relevance of "constantly re-programming and re-configuring" would be. People don't constantly re-program themselves. And except for some people addicted to plastic surgery, they don't re-configure themselves. (Actually, that isn't quite true. If you eat some food and then get sick, your brain might re-configure itself to dislike that food.)

Intelligence and free will are totally separate. Imagine an AI that I could tell "please write 50 symphonies in the style of Mozart, just a lot better", and it would do that, with no choice not to do it: that AI would have an awful lot of intelligence and no free will.

An artificial intelligence is complex. You ask for an AI that is not "pre-determined" by something a programmer programmed. However, even a primitive chess program makes moves that the programmer of the chess program didn't foresee. An AI would constantly do things that the programmers didn't foresee. Do you do anything that isn't pre-determined by your genes? How do you know? It's the same with an AI. We don't know what "free will" is. We don't know if it exists, either in an AI or in a human being.
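The chess point can be made concrete with an even simpler game. In the sketch below (a hypothetical take-away game, not chess), the programmer writes only the rules and a generic search; no individual move appears anywhere in the code, yet the program produces definite moves the programmer never wrote down.

```python
# Toy illustration: the programmer specifies only the rules of a game
# (take 1-3 sticks from a pile; whoever takes the last stick wins) and
# a generic game-tree search. The moves themselves are nowhere in the
# code; they emerge from the search.

def can_win(pile):
    # True if the player to move can force a win from this pile size.
    if pile == 0:
        return False  # the previous player took the last stick and won
    return any(not can_win(pile - t) for t in (1, 2, 3) if t <= pile)

def best_move(pile):
    # Return a winning move if one exists, else take 1 stick.
    for take in (1, 2, 3):
        if take <= pile and not can_win(pile - take):
            return take
    return 1

print(best_move(5))  # 1: leaves a pile of 4, a losing position for the opponent
```

Whether the programmer "foresaw" `best_move(5) == 1` is beside the point: the move is entailed by the rules plus the search, just not explicitly written anywhere.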

That aside, electronics seems to be a lot easier to work with than brain surgery. So an AI would probably have a better chance of improving its "brain" than a human has. Actually, we know much better how to improve computers than brains, so obviously an AI that could acquire that knowledge could learn how to improve its own brain while remaining clueless about how to improve yours.

Actually, a sufficiently developed robotic AI with an internet connection could probably find a job as a computing consultant (forging ID and pictures), open a bank account, make money, order parts online to improve itself and so on.

gnasher729
  • One constantly re-programs oneself whenever one slightly alters some 'behavioral algorithm' to 'make it better fit' some potential activity one is contemplating. Any slight change in the patterns of info. one is 'trying' to use, and trying to remember these changes, is a type of re-programming. One reconfigures one's 'lists' of potential info. patterns within one's 'mentality', and one can even reconfigure physical structures in one's mind-brain through some types of behavioral training, including informally. – 201044 Jan 08 '15 at 23:06
  • Your brain stays inside its pre-determined parameters all the time (assuming you're not using brute force). – gnasher729 Jan 14 '15 at 17:37
  • But the 'mind' can rearrange or reconfigure the information content of any of its 'behavioural algorithms' it has 'access' to, as long as the 'algorithm' remains functional. So the 'mind' can 'think outside the box' regarding going 'outside' its 'own' functional parameters. – 201044 Jan 16 '15 at 06:24
  • @gnasher729: What about plasticity, where part of the brain is damaged and other 'parts' help to take over 'its function'? These other parts of the brain are not staying within their predetermined parameters all the time. – 201044 Jan 25 '15 at 06:08
  • Plasticity could not 'work' if the mind-brain system were ONLY a physical brain. It requires a type of functional reprogramming, with some kind of cognitive mapping of the way any damaged neural 'circuits' should be, that could not be 'housed' only in physical structures. Such 'mappings' and relevant corrections could be 'housed' in the dynamic functional operating systems of the 'mind-brain'. – 201044 May 31 '15 at 08:25
1

No, there will never be artificial "intelligence" that operates outside of the programming we humans have designed. Computers are nothing but a tool, something that carries out instructions. They don't think for themselves, and they aren't capable of doing anything we have not told them to do. Any adapting and "learning" going on here has to be a product of human-designed programming, and it is limited to that programming.

YogiDMT
  • Computers are nothing but tools; yet another writer here suggested computer and electronic 'evolution' is 'improving' so much that the great singularity event computer scientists talk about, where some A.I. system can 'think', will surpass its programming and 'make up its own'. I personally do not think any machines can evolve; they cannot propagate, at least not yet. – 201044 May 31 '15 at 07:59
  • Any adaptability or "intelligence" a computer has, has to have been programmed by a human. Its "intelligence" is limited to whatever we decide to give it. Is a robot "learning" by using data it receives to change its programming and adapt itself? Technically, but still, that behavior itself is something we programmed to begin with. Computers can never be smarter than humans because they aren't smart to begin with; they just do what we tell them to. – YogiDMT May 31 '15 at 08:12
  • Well, what about the 'so-called' singularity event predicted by many computer scientists? – 201044 May 31 '15 at 08:18
  • It doesn't exist. Any programming a computer can ever hold is a product of human design. Even if complexity evolves to a point where it cannot be recognized, it's still, at the fundamental level, founded upon human design. Computers, which are only just a vehicle, will never be "intelligent", even if humans project some of our intelligence onto them. – YogiDMT May 31 '15 at 08:38
  • So you disagree with a lot of computer scientists. Isn't the whole point of Artificial Intelligence research to make a system that can program itself? Just because an information management system uses ideas and processes that were 'previously' established by people in the past does not mean the system isn't capable of NOVEL recombinations of older ideas. Newton said he could see so far because he stood on the shoulders of giants. That does not mean he (and Leibniz) did not ORIGINATE calculus. Calculus is a novel recombination of older ideas. – 201044 May 31 '15 at 08:49
  • The programming to program itself is still a product of human design, though; it's still limited to the code we give it. There's nothing "smart" about a computer's ability to "learn" or adapt. The computer is simply carrying out our intelligent designs; it's not thinking or making any decisions on its own, nor is it doing anything or changing any part of itself that we did not design it to do. – YogiDMT May 31 '15 at 16:41
  • Well, if we design a computer system to be able to change 'parts' of itself, parts of its own programming, and it can't make any such changes on 'its own', then it isn't really designed to be able to change itself without some programmer 'looking over its shoulder' or the programmer's previous work 'telling' it what to do. So this would imply A.I. won't work. – 201044 Jun 02 '15 at 01:30
  • Could a computer system rewrite some of its programming code so it could solve a problem no human being has yet solved? Could someone write a set of programs that could solve, say, the four color map problem in a much 'faster' way than the way it was solved? – 201044 Jun 20 '15 at 05:34
  • The OP asked if a computer is capable of operating outside of its design. It can't, because all a computer is, is design. Computers are the product of human design. There's no magic here. Yes, it can solve things we haven't solved yet and find better ways to do something, but it would be through brute force. Computers aren't intelligent; they just follow our instructions really fast. – YogiDMT Jun 21 '15 at 06:36
  • No, I asked if a computer system can be designed to be able to rewrite or alter some of its own programming, in ways that obviously don't sabotage its proper functioning. Isn't there a computer programming challenge that exists today for the first person to write a program or set of programs that can interact and 'come up' with some new math theorem nobody has thought of? – 201044 Jun 24 '15 at 01:38
  • Yeah, I don't see why either should be outside the limitations of a computer given the right programming, but again, the program isn't doing anything original or outside of what the code allows here. When you talk about free will and the computer acting in ways that couldn't have been predicted, that isn't possible, because computers don't think for themselves; they just execute human-made code. – YogiDMT Jun 24 '15 at 01:51
  • But can human-made code cause a computer system to make 'productive' connections between sets of information that might not be predictable, at that given moment, to any human 'observing' the computer? – 201044 Jun 24 '15 at 01:55
  • Possibly but again you would have to write the program in such a way that makes that happen. There's definitely varying levels of efficiency when it comes to code but at the end of the day it's almost always brute force trial and error when it comes to computers. – YogiDMT Jun 24 '15 at 02:00
  • Everything a computer does is something that a human can theoretically do too. It's just a matter of time and effort. – YogiDMT Jun 24 '15 at 02:04
  • Yet my point is: is it possible to write code for a computing system that not only gives the computer the ability to rewrite or alter its programming but allows it to keep rewriting and altering its programming in productive ways, non-stop? The original human programmer starts it off, or initializes it, and gives it the ability from that moment on to keep reprogramming itself. – 201044 Jun 24 '15 at 02:11
  • Sure, anything's possible. But such a task would be extremely tricky for the programmer. Everything the computer does is a direct result of the code a human has written. – YogiDMT Jun 24 '15 at 02:15
  • Anything's possible; so if a programmer did this and made the computer able to program itself for the next 5 years, say, it could come up with all sorts of NEW or not-yet-used programs, churning out programs that may be better than anything previously written by humans. Of course, as you say, the one initial human programmer could claim full responsibility for all the following 5 years of programming, even if he admits many of the resulting programs he never would have thought of.... – 201044 Jun 24 '15 at 02:37
  • He might have never thought of certain logical paths specifically but the design of his program has to be in such a way that allows for this "new" programming to eventually be reached. So while a future variation of the program may not have been specifically thought of by the programmer, it's still a variation that is the product of his initial code. – YogiDMT Jun 24 '15 at 04:01
  • 'The design of his program has to be in such a way that allows for this "new" programming to be reached. So while a (much) future variation of the programs may not be from the programmer, it's still a variation of his work.' This implies the originators of, say, calculus, and their versions of the calculus 'code', or set of fundamental principles, created a 'code' that allows any future variations on these ideas to exist. So any future theory of calculus is primarily the originators' responsibility or 'property'.... – 201044 Jul 14 '15 at 20:18
  • I think you're trying to overcomplicate things here. Computers are literally just machines that carry out our instructions; that's it. Anything a computer does is the result of human logic. It doesn't make intelligent decisions, nor does it think for itself. There is no such thing as artificial intelligence, although I'm sure that in the future there will be some pretty complex and pretty clever human logic instilled inside these machines. – YogiDMT Jul 15 '15 at 01:30
  • So artificial intelligence IS an oxymoron! I asked that question on one of these stack exchange sites and got lots of disagreement. – 201044 Jul 16 '15 at 13:37
  • I asked it on APRIL 3 at 3:47 ; if Artificial intelligence was an oxymoron. I actually asked it on this philosophy site. – 201044 Jul 16 '15 at 13:40
  • Yes, I would say artificial intelligence is an oxymoron. – YogiDMT Jul 16 '15 at 23:42
  • It doesn't really matter if any new programming of a computer system can be traced back to its programmers' initial code; the possibility of a computer system able to reprogram 'parts' of itself without causing system failures would be VERY useful indeed. Also, no programmer could 'see' every possible contingency an information programming system might 'encounter', so there may be certain 'programming environments' that such a system might have to 'handle' itself. – 201044 Aug 15 '15 at 11:47
0

You ask,

[…] could an A.I. system be able to actively 'reprogram' itself and change itself in ways only 'it' could give specific definite 'justifications' for?

Disregarding the like-that-but-not-quite quotes, it appears that you meant to ask

Could a machine intelligence be able to actively reprogram itself and change itself in ways that only it could give specific definite justifications for?

And this is apparently in the context of

'ability' to change one or more of its 'internal states' relative to the information content and 'information-packages' it has 'access' to in ways that are NOT predetermined by 'previous programming' or information 'set-up' by any 'programmers'

Well, the idea of a directly programmed intelligence, one where each main function of the mind is implemented by a programmer, was common in the 1970s. But while it's nice as a goal for gaining a better understanding of how, e.g., vision processing works, it's wholly impractical as a way to create an intelligence. As Alan Turing already noted (1) in 1948 or so, the most likely way a machine intelligence will be created is the way a human intelligence is created, namely by growing up and learning – with some basic instincts and abilities in place, of course.

So a first answer is that the question as posed doesn't make much sense, because it's very unlikely that a machine intelligence will be of the directly programmed variety, that is, that there will be any programming at all (except of basic functions such as edge detection in vision processing): I'd guesstimate that it's about as likely as a crocodile emerging up through the asphalt in the street, deftly stealing your wallet, only to be hit by a giant iron hippopotamus accidentally dropped from a passing airplane.

However, running on a digital platform means the possibility of making copies, partly or completely. It means the possibility of trying things out in simulated environments. Not least, it means that exploration of possibilities can be really, really fast. Currently the electronics is some millions of times faster than our brain stuff, and that difference increases exponentially, which it has kept doing since 1965 or so, roughly a doubling every 2 years. So we can expect some really fast evolution as soon as machine intelligences start creating new ones.
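As a toy illustration of that point (everything here is illustrative, not a claim about real AI systems): a digital system's state can be cheaply copied, a random variation tried out against a simulated scoring environment, and the copy kept only if it scores better. A minimal random hill-climbing sketch:

```python
import random

def fitness(params, target):
    """Simulated environment: score how close params get to a hidden target."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def explore(params, target, rounds=200, step=0.5):
    """Copy the current state, try a random variation 'in simulation',
    and keep the copy only if it scores strictly better."""
    best = list(params)
    for _ in range(rounds):
        candidate = [p + random.uniform(-step, step) for p in best]  # cheap copy + tweak
        if fitness(candidate, target) > fitness(best, target):
            best = candidate
    return best

random.seed(0)
target = [3.0, -1.0, 4.0]              # what the simulated environment rewards
start = [0.0, 0.0, 0.0]
improved = explore(start, target)
print(fitness(start, target), "->", fitness(improved, target))
```

Because rejected variations cost nothing but compute, the loop can run millions of times faster in silicon than trial-and-error does in the physical world, which is the speed advantage the paragraph above gestures at.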


1) Mainly section 9 “The cortex as an unorganized machine” of his 1948 technical report “Intelligent Machinery” (the link is to my transcription of photocopies of the original manuscript).

  • Some other writer here said I should use the term 'justify' instead of 'give reasons for'. – 201044 May 31 '15 at 07:24
  • Electronic 'evolution' is certainly going faster and faster, but there are some things computer systems (the way they're 'evolving') might never be able to do. A computer can't lie or distort info. without being programmed to. And being 'told' what to 'say' by a programmer (previously, through the programming) is not the same as a self-initiated lie. A computer is not as error-tolerant as human beings are. A computer has every relevant term it uses strictly defined and computes with brute-force methods. The Asimo robot calculates trillions of variables a second just to 'walk'. – 201044 May 31 '15 at 07:35
  • @201044: You can gain a better understanding of this by studying programming. I recommend starting with [Python](http://www.learnpython.org/). As an alternative you can do Coffescript/Javascript. E.g. see [JSFiddle](https://jsfiddle.net/), and there are lots of such places where you can try things out and cooperate with others. – Cheers and hth. - Alf May 31 '15 at 07:43
  • The people who first wrote all the code for the programming of Microsoft computers: are they then responsible for any future code written by any programmers or computer systems in the future? – 201044 Jun 24 '15 at 15:21
0

Yes, to be intelligent an entity has to be capable of self-reprogramming. There is no way around it.

You have to look at what comes before intelligence: knowledge. Knowledge is about acquiring sets of rules. Once you know the rules about something, you are knowledgeable in it.

It all begins with data. Arrange data in some form (sort it, categorize it, find its limits, average it, etc.) and you now have information.

Analyze the information. Generalize it. You find rules that always work on the data. You find rules that never work. You find rules that usually work. You find rules that work only on selected data, i.e. you discover the ranges of your functions (functions are rules), and rules that work only on a particular kind of data, i.e. you discover their domain.

The rules you gather are your information base, as opposed to your database, which is just a collection of information.

Data -> Information -> Knowledge

The rules are your functions, your formulae, your equations, your theories, your laws: all different names for the same general concept.
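A toy sketch of that Data -> Information -> Knowledge progression (all names and data here are made up for illustration): raw pairs are organized into information, then generalized into a rule that fits every observation.

```python
# Data: raw (input, output) pairs, as observed.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]

# Information: organize the data -- sort it and find its limits.
ordered = sorted(data)
domain = (ordered[0][0], ordered[-1][0])    # range of inputs actually seen

# Knowledge: generalize a rule that always works on the data.
# Here the observations fit a linear rule y = slope * x + intercept.
slope = (ordered[-1][1] - ordered[0][1]) / (ordered[-1][0] - ordered[0][0])
intercept = ordered[0][1] - slope * ordered[0][0]
rule = lambda x: slope * x + intercept

assert all(rule(x) == y for x, y in data)   # the rule holds on every observation
print(rule(10))  # -> 21.0, applying the rule beyond the observed domain
```

The last line hints at the next step in the answer: using a rule where no data exists yet.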

However, this is not enough. Often complete data is not available. There are big holes in the data, and we must act before all of it is available.

Intelligence to the rescue: it fills the gaps in information. Rules not known are inferred. Given how much data you have, and how good you are at finding rules from that data, you may be pretty good at guessing the rules you don't know.

Now, if an organism is intelligent, it can add rules to its information base that don't come directly from data. This adding of rules necessarily changes its internal program. The entity is capable of re-programming itself.

Behaviour, of course, is how one is programmed. Re-programming definitely means a change of behaviour.

This is as much free will as we get. Within the confines of our initial programming, our instincts, we are capable of re-programming ourselves. There is a wide range for us to re-program in.

We can change our habits, learn new skills, even get rid of old habits and forget what we learned.

This is the path any A.I. will follow, because it's inherent in that last word: intelligence.
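The re-programming idea above can be sketched in a few lines (a purely illustrative toy, not a real learning system): an agent whose behaviour is a rule table that it extends from experience, so its later behaviour differs from its initial programming without any outside intervention.

```python
class Agent:
    """Toy agent: its behaviour is a rule table it can rewrite from experience."""

    def __init__(self):
        self.rules = {}                      # 'initial programming' starts empty

    def act(self, situation):
        # Behave according to the current rule table; fall back to a default.
        return self.rules.get(situation, "explore")

    def learn(self, observations):
        # 'Re-programming': add rules inferred from experience to the table,
        # which changes all future behaviour in those situations.
        for situation, response in observations:
            self.rules[situation] = response

agent = Agent()
before = agent.act("hot surface")            # default behaviour
agent.learn([("hot surface", "withdraw")])
after = agent.act("hot surface")             # changed behaviour
print(before, "->", after)                   # explore -> withdraw
```

The point of the sketch is only that "adding a rule" and "changing behaviour" are literally the same operation here, which is the answer's central claim.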

Atif
  • 1,074
  • 10
-2

For a true artificial intelligence, yes, it must have what you call a free will. For a simulated artificial intelligence, it's not necessary.

I call it the ability to change its cognitive parameters in order to learn, understand or resolve. Also, as gnasher said previously: intelligence and free will are totally separate. An AI is defined by its components, hardware and software, and you need to think from this perspective: free will doesn't mean anything for a set of transistors and chips. It's built to compute. Thinking requires a lot of computation, and what you call free will, I call engineering and applied (computer) science, in the context of what we know an AI primarily is.

For a given unknown parameter, variable or problem, it needs to think in order to learn how it works, understand why it works, and eventually apply the reasoning to solve what is yet unknown to "it", and integrate it. One way to integrate it would be to reprogram itself for optimization, so that it doesn't have to go through the process of thinking again, but can apply the solution directly in a similar situation.
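One simple mechanical analogue of "solve once, then reprogram so the thinking isn't repeated" is memoization. A sketch, where the slow `solve` function is just a stand-in for an expensive reasoning process:

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def solve(problem):
    # Stand-in for expensive 'thinking'; the artificial delay represents it.
    time.sleep(0.05)
    return problem[::-1]        # the 'solution' is just the reversed string

t0 = time.perf_counter()
first = solve("unknown parameter")
t1 = time.perf_counter()
again = solve("unknown parameter")   # cached: the stored solution is reused
t2 = time.perf_counter()

print(first == again)                # True
print((t2 - t1) < (t1 - t0))         # True: the second call skips the 'thinking'
```

After the first call, the function's effective behaviour for that input has changed (it answers instantly from its stored result), which is a very weak but concrete form of the self-optimization described above.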

Free will doesn't mean anything for something made of the hardware and software we know. It's all about data and results, through processing.

"I think therefore I am".

Six
  • 1
  • 1
  • 2
    Could you provide some reference for this? This is a not a board for exchanging opinions, but rather for a scientific approach to philosophy. –  Jan 06 '15 at 10:16
  • 1
  • Not a good start, I see :) Not every answer has to be academic and 100% objective with well-known references. If you remove the capacity for thinking on its own, what remains? Of course it's also opinion-based. I obviously lack knowledge in philosophy (even if I find it interesting), but AI is also one of my domains. It won't be my last try here anyway, I'll be back! :) – Six Jan 07 '15 at 06:30
  • Sure, please, keep looking around. Perhaps also take a look at the [help], to learn more about how this board works exactly. –  Jan 07 '15 at 12:36
  • 'Free will doesn't mean anything for something made out of the hardware and software we know.' This is apparent right now, unless some think tank or mysterious group is hiding a new type of A.I. system. I was asking whether an A.I. system COULD be developed so that it can reprogram or reconfigure its operating system or any 'inherent' programs, so that, as a self-contained, self-controlling 'program-manipulation-and-management' system, it could 'change' its behaviour in ways NOT predictable by any of its programmers? – 201044 Jan 08 '15 at 04:16
  • 'Some sort of notion of free will.. where an internal process.. acts upon the mind and "POOF" the robot has free will ..and not the original programming. This is OF COURSE a SILLY notion if you hold that everything is physical and.. has a cause.' Someone wrote this recently, and it assumes the 'mind' (even if you consider it a physical system that can manage and manipulate patterns of info. represented as various 'brain events') cannot cause any event 'itself', because the mind is not 'physical', or because the mind can't change anything without relying on the 'programming' of the brain. – 201044 Jan 10 '15 at 06:18
  • If the 'mind' is an info. and info.-algorithm manipulation and management system that can 'change' some of the info. and info. programs it has 'stored', and can actively 'make' a new or novel combination of any previous info.-algorithms (whether this 'new algorithm' is entirely predictable from 'older' info. or info. programs it has 'stored' or not); if this 'mental' system can 'make' new algorithms and actually act on them, it can 'change' itself and its 'behaviour' in ways ONLY it can give any 'reasons' for. Nothing else could know enough 'supporting details'. – 201044 Jan 10 '15 at 17:06
  • This self-manipulating 'mental' system would seem potentially unpredictable from 'outside' points of view, because outsiders can't have enough 'predictive info.' to make accurate predictions of what the 'mental system' will do 'in the next 5 minutes'. So to outside observers it would seem as though the mental system is operating with some kind of free will or 'freely self-variable behaviour', even though the person or 'mental system' themselves could have enough info. to 'make' predictive models of what they want to do next. This in fact does happen when we are 'thinking'. – 201044 Jan 10 '15 at 17:14
  • Maybe Descartes should have said something like 'I am in the active process of thinking now and for at least the next 5 minutes; therefore I am, right now and for at least the next 5 minutes, an existing being.' – 201044 Apr 03 '15 at 03:55
  • @YogiDMT: I didn't say one future variation of the program; I said the computer system did 5 years' worth of programming, possibly hundreds of programs. The initial programmer COULD claim he is responsible for all these hundreds of programs, but if the computer system has been given the ability to reprogram itself for its many 'self-established' programming projects, which it keeps running over and over for 5 years, I don't see how the initial programmer could take the credit for ALL that. – 201044 Jun 24 '15 at 04:09
  • What about all the people who helped start basic arithmetic and basic algebra? They started the 'codes' of arithmetic and algebra from which many mathematicians in the 1600s and 1700s, and so on, went on to make new theories or 'codes' of mathematics. You couldn't say that those who helped originate arithmetic and the Arabic number system are solely responsible for all future theories or codes of modern mathematics. – 201044 Jun 24 '15 at 04:16
  • If no computer system or algorithmic 'calculating' system can EVER vary its own behavior without being 'indirectly prompted' to do so by some person or persons at some time in the past (relative to the system now), what exactly is the point of computer systems? Are they ONLY very fast information-processing devices that might give the ILLUSION of intelligence, as in a Turing Test? Also, if a human being's mind can be sufficiently approximated by the 'concept' of a computer system, how can this human mind initiate anything 'new' on its own? – 201044 Jul 22 '15 at 02:56
  • A true artificial intelligence has just as much free will as I have. No more, no less. Well, if you define a “true” artificial intelligence as something that has all my qualities. – gnasher729 Jan 03 '23 at 18:20