Talk:The Origin of Thought
--Edmiao 22:01, 22 April 2007 (MST) um. huh? manamana. de dee be dee de. manamana. dep deede deep. manamana!
I would define intelligence as creativity. I think someday you could make a computer with enough programming to respond to any input appropriately and appear to hold a conversation, but it would lack creativity. It could not imagine something new and bring that idea or thing into reality. manamana! do doo de do do. manamana. do doo do do. manamana! do do de do do de doo do doodle oo do doo de do.
--Jason 13:46, 23 April 2007 (MST)I think you have hit on an important distinction here. I am not convinced that creativity is intelligence, but I certainly think it's an integral part of intellect. It's one reason why I am not sold on much of Jared Diamond; his contention that primitive peoples are (or could be) more intelligent as a whole than more advanced societies strikes me as clearly untrue, because they lack what I consider to be the most important piece of cultural intelligence: innovation. If they had it, they wouldn't be primitive anymore!
But the Turing test doesn't really get at creativity at all. How can we abstract creativity? Really, in many ways it is reproducible. It will be tied in with the tags, as I have mentioned previously. It's a core piece of my thesis, just like humor. These things arise from an ability to reference entries marked with certain tags, and then to create a reference in a way that is not standard.
For example, according to my theory, we find something interesting or creative when it is presented and our mind sees the product and recognizes its equivalence to a tag it previously generated; the reference is then recognized. Items are most profound when we agree with the generated reference yet it was not a reference we ourselves created. It's almost like a subconscious 'Oh, I hadn't considered that, but it's totally true' reaction inside yourself.
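A rough sketch of what I mean, in Python (the entries and tags are just placeholders I made up, not a real implementation):

 # Memory holds entries, each marked with previously generated tags.
 memory = {
     "sunset": {"red", "ending", "beauty"},
     "war":    {"red", "ending", "loss"},
 }

 def find_reference(a, b):
     # A presented pairing feels 'creative' when the two entries turn
     # out to share a tag we had generated but never explicitly linked.
     return memory[a] & memory[b]

 # The 'Oh, I hadn't considered that, but it's totally true' moment:
 print(find_reference("sunset", "war"))  # {'red', 'ending'}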
I know this isn't perfectly formulated yet, but I'm hoping to make it more concrete, which is why I made this page. I appreciate hearing your comments. They help me think about these things more deeply.
--Edmiao 13:56, 23 April 2007 (MST) May I ask, from where does this interest in artificial intelligence arise?
--Jason 14:07, 23 April 2007 (MST)It's a combination of two things. I took a philosophy of artificial intelligence seminar in college and it was my favorite class ever. It really got me interested in philosophy, and that interest has only grown over time. The whole thing came together about two years ago when I had a breakthrough thought about how I believe humor is created inside of us and why we like it. Since I spend time thinking about it anyway, I might as well see if I can coalesce it into something tangible. Maybe that guy with the question marks on his suit can get me a government grant to write a book about it. hehe
--Edmiao 14:33, 23 April 2007 (MST) Interesting. (btw, some of your jargon, like "tags" and "references", makes it hard to read for a layperson such as me. i get it, but i first have to say: huh? oh, i get it.)
On your primitive peoples: I have no doubt that they are innovative and intelligent; they just have a lower level of technological development than other humans. It would be incredibly difficult to compare their intellectual capacity with that of other humans.
I just skimmed the Turing test article on Wikipedia, and it seems like a pretty inadequate test for intelligence. It is a test of language, and I don't think the ability to use language equates to intelligence. It seems well within the bounds of likelihood that a computer could learn language and how to converse on any topic you gave it enough data on. Likewise with people: we can converse intelligently on any subject we are versed in, but when you start to talk to someone such as myself about the nature of intelligence, I only spit out nonsensical strings of words in a vain attempt to sound intelligent. A computer could easily do that.
Where does the line between intelligent and not intelligent lie? Is a bacterium intelligent when it uses chemosensors to direct motility toward food and away from noxious stimuli?
As to humor: perhaps humor requires intelligence, but intelligence is not determined by the capacity for humor. Imagine a culture that has no humor (like the Vulcans in Star Trek; sure, they're fictional, but such a culture could exist); those people would not be defined as unintelligent.
--Jason 15:34, 23 April 2007 (MST)I disagree about primitive peoples. If they innovated they would invent new things, and as they continued to invent they would eventually cease to be primitive. If it could be shown that these peoples had existed for a significantly shorter period and therefore hadn't had the time to create the necessary innovations, that would be a different case. Yet as far as I know, they have existed as long as any other group of people, and instead of inventing new things and stretching their boundaries they continue to live as before with little change. Of course, it could be argued that this is due to some cultural overdependence on tradition. That wouldn't make the individuals less intelligent, but when I spoke of them I wasn't speaking of individual intelligence; I was speaking of cultural intelligence, which is certainly lower (by my definition).
About the Turing test, I'm not sure it's just a test of language. The machine must respond in such a way as to be indistinguishable from a human being. If it can carry on a conversation in this manner, that's much more than just language. It's recognition of humor and idiom, grammar and syntax, the forwarding of opinion, and anything else that may occur in a conversation. What this really attempts to get at is: what makes us different from a giant database? That's the important question here. Once we can agree on what that is, we can determine whether it's something we can abstract and reproduce, or whether it is somehow mystical.
As of yet, I don't know the answer. I don't even have a well-formulated belief.
--Edmiao 16:24, 23 April 2007 (MST) this is proof positive that you have too much time to think about philosophy because you have eschewed gaming. think you're coming back ever?
--Jason 16:45, 23 April 2007 (MST)Philosophy is very important to me. My problem with gaming is I can't handle any kind of conflict right now. Any kind of argument or disagreement just frustrates me and can make me unreasonably mad. So, I don't know. I will start gaming again someday, but it's hard for me to decide if I'm coming back there because the group is so large. I want to play in a group of 3-4 people and no more. Still, you guys are my best friends, so that weighs heavily in your favor.
--Edmiao 17:12, 23 April 2007 (MST) btw, we're going to see Spider-Man at the IMAX Saturday 1:30, opening weekend. see the current events page.
--Gdaze 10:03, 24 April 2007 (MST) I always thought the problem with computers is that they can't come up with ways to deal with things not programmed into them. However, that was a long time ago... And I know NOTHING about programming.
I've wondered that about primitive peoples myself. Why did some cultures expand their scientific knowledge while others didn't? I think in some cases, like Rome and China, they stopped because they didn't really need to. They just didn't see any reason to expand... Maybe this is why primitive people do it as well? I mean, just because a culture doesn't become more advanced doesn't mean it can't; maybe they just have no desire to? I dunno, when I try to express ideas, they come out all scattered at first. Anyway, AI and human intelligence are really interesting topics. The guy who is making the power armor in Japan basically says that AI is impossible, so instead let's augment humans. Get this... his company's name is Cyberdyne, and the suit is called H.A.L.
--Edmiao 10:28, 24 April 2007 (MST) that is so awesome. but it's already been done; see the armored human in Project Grizzly. I think that technological advancement is driven by several components. First, there is the ability to progress, which is limited or enabled by the genetics that define your maximum intellectual capacity. Second, the potential must be realized; this requires a culture and society that encourages intellectual growth and emphasizes technological improvement over maintaining the status quo. Third, the rate of technological development is probably highly dependent upon the need for it: if food is plentiful and your culture is such that the population is stable over time, then there is likely no strong drive to create technological advances. On the other hand, if your society is competing with neighbors for resources and has a growing population, then technological advancement will be strongly selected for.
Ben here: despite Jason's misgivings, I'd suggest you and Gabe read Guns, Germs, and Steel. The Third Chimpanzee, also by Jared Diamond, is fairly interesting, although it doesn't discuss this exact topic (while GGS does).
--Jason 11:34, 24 April 2007 (MST)That book has a lot of interesting and well-researched info concerning what Ed was talking about. I don't dislike the book; I just disagree with a few premises from the foreword. The stuff in the actual meat of the book is pretty interesting and solidly presented.
--Edmiao 11:47, 24 April 2007 (MST) i've heard of that book, i'll have to read it. from skimming the summary on Wikipedia, it sounds like he investigates intelligently what i was suggesting off the cuff (with no evidence besides racism, bias and superstition). is it an engaging read? I have this bad habit of choosing to read books that i think i should read without realizing that I'm out of the habit of reading in general, and then it takes me forever to finish them. I recently read A Tale of Two Cities over like 6 months. Now I'm reading Anansi Boys by Neil Gaiman, which seems like it will only take me a week and a half to finish.
--Jason 12:00, 24 April 2007 (MST)I am a terrible reader, so it takes me forever anyway. I basically just read a chapter here and there in that book and skip to what seems interesting. It's what I do with philosophy books too. I just picked up Dune off of Skippy's shelf and started it; who knows if I can finish it.
--Edmiao 12:04, 24 April 2007 (MST)I always liked Dune. Just learned that Neil Gaiman was a guest writer for Spawn and created Angela, Cogliostro, and Medieval Spawn.
--Jason 12:10, 24 April 2007 (MST)Yeah, and he got into a creator war with dickhead Todd McFarlane to keep his copyright on them, too. I think they settled out of court. I'm about 50 pages into Dune and I can groove on the Paul Atreides parts, but it's hard for me to pay attention to the Harkonnens or the Bene Gesserit.
--Edmiao 12:16, 24 April 2007 (MST) I'm working so hard. So Gaiman also wrote the Sandman graphic novel series. I should read that, and Watchmen too. Been a long time since i read Dune. But I would think you could identify with the Harkonnens; isn't the baron a morbidly obese, boy-loving prick?
--Matts 12:17, 24 April 2007 (MST)Not to add anything to this debate, but ultimately the difficulty with "artificial intelligence" (while I'd argue that intelligence isn't synonymous with human thought, we'll go with the flow here) is that computers as they exist are really bad at pattern matching. That's not to say they can't do it; it's just that the human brain somehow has specialized mechanisms for the kind of probabilistic modeling that allows us to recognize context at a glance. A computer is notoriously bad at that sort of recognition, usually only achieving it with reams of iteration, which it's extremely good at but which doesn't scale well to large problem spaces. The human mind has the capacity to make a snap judgment about when we've received enough information to move forward. Implementing those sorts of controls in a program is exceedingly difficult, but I'm pretty sure that's fundamentally what machine learning is all about.
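A toy illustration of that "enough information" control (this is schematic, not a real machine learning algorithm; every name and number here is invented):

 import random

 def classify_exhaustive(samples):
     # A computer's default: iterate over every piece of evidence.
     return sum(samples) / len(samples) > 0.5

 def classify_snap(samples, confidence=0.65):
     # Human-ish: stop as soon as the running estimate looks settled.
     votes = 0
     for i, s in enumerate(samples, start=1):
         votes += s
         estimate = votes / i
         if i >= 5 and (estimate > confidence or estimate < 1 - confidence):
             return estimate > 0.5   # snap judgment, most data unread
     return votes / len(samples) > 0.5

 evidence = [random.random() > 0.3 for _ in range(10000)]
 print(classify_exhaustive(evidence), classify_snap(evidence))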
There's some thought that with quantum computers we'll be able to more accurately model the non-deterministic nature of the human brain, but given that we understand so little about it, that seems like a long shot.
As for "thought", I'd define it (dismissively and unfairly) as the computational process of our brain. More specifically, the computational process our brain goes through when adapting to a problem that's difficult for its mechanisms. Hence, (and with no real proof or rigorous analysis to go along with it) I'd say that a computer is indeed "thinking" when it's iterating over a large set of data.
That still leaves totally untouched the question of "where does synthesis come from".
--Jason 12:18, 24 April 2007 (MST)I've heard he is actually a necrophiliac too, which I'm totally into, like the rest. I just can't forgive him for installing anti-grav platforms to levitate his fat. I figure once you have spent your whole life achieving such an amazing girth, the least you can do is treat your feet to the pleasure of bearing the weight.
--Gdaze 12:20, 24 April 2007 (MST) Ah, I shall try to. I've been reading a book on the history of the tattoo in Europe... but it is overdue actually... It also takes me forever to read anything. Whooo, my meds are making me really drowsy.... Need to sleep. And that grizzly armor hardly counts... H.A.L. actually allows the user to not tire as quickly. Although we are working on several as well. H.A.L. isn't fully armored, though; it's more like an exo-suit.
--Jason 12:24, 24 April 2007 (MST)I think I'll get at Matt's comment more when I get deeper into this project. My contention is going to be that the way our brain recognizes context, and therefore exhibits intelligence, is by returning references to keys. We can join ideas or occurrences in our head and bring back the reference because we have generated a wide range of keys. Our brain also has the capacity to reference these keys and determine whether a key is relevant even if it isn't equivalent.
That last piece is something more akin to what you mention, something that is difficult if not close to impossible for computers today to model. I don't think it's functionally impossible, just currently impossible.
The next step of learning is, of course, rekeying that entry to add the newly created key. Now it is immediately able to be referenced under this key; it no longer requires any computation (probably related to the way we do regular expressions or pattern matching).
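In database-ish terms, a crude sketch of what I'm picturing (the entries and keys are invented examples; the real mechanism is surely nothing this literal):

 # Entries in memory, each filed under the keys generated so far.
 entries = {"coffee": {"bitter", "morning"}, "monday": {"morning", "dread"}}

 def relate(a, b):
     # The expensive step: compute whether two entries share a key.
     return entries[a] & entries[b]

 # First encounter: the connection has to be computed...
 shared = relate("coffee", "monday")       # {'morning'}

 # ...then both entries are rekeyed under a newly created joint key,
 # so the next reference is an immediate lookup, no computation needed.
 new_key = "morning-dread-fuel"
 for name in ("coffee", "monday"):
     entries[name].add(new_key)

 print(entries["coffee"] & entries["monday"])  # now includes the new key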
--Edmiao 12:47, 24 April 2007 (MST) So, as Matt says, computers aren't very good at deciding what data is most important, what is irrelevant, and when to stop looking before deciding. In sum, computers are not good at doing what our brains do, as of yet. Now consider that the first computer was built in 1941 or so, which makes the evolution of computers 66 years old. The evolution of humans is what, hundreds of thousands of years or more? I think the question is not whether artificial intelligence is possible now; it is a question of whether AI is ever possible. Thus, you must take as a given any and all possible advances in computing. Consider the current rate of improvement in microprocessors. Now give them 10,000 years.
The question becomes a religious one. Is human intelligence the sum of firing neurons that generate brain activity, or is there something else driving it? Brain activity is generally believed to be the sum of neuronal activity. If this is correct, then eventually (within the next 100,000 years) someone will make a computer that replicates the web of neuronal connections, each neuron sending out an axonal signal that reflects the integrated input it receives from hundreds of other neurons. The question will be: is that thing intelligent?
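Crudely, each simulated unit would just do something like this (all numbers invented, and real neurons are vastly messier):

 import random

 def neuron(inputs, weights, threshold=1.0):
     # Integrate the input arriving from all upstream neurons...
     total = sum(i * w for i, w in zip(inputs, weights))
     # ...and send an axonal signal only if it crosses threshold.
     return 1 if total > threshold else 0

 upstream = [random.random() for _ in range(300)]             # hundreds of inputs
 synapses = [random.uniform(-0.01, 0.02) for _ in range(300)] # connection strengths
 print(neuron(upstream, synapses))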
The answer to the AI question could finally determine whether humans have souls.
--Jason 13:18, 24 April 2007 (MST)That last bit is a very strong statement, and I think it's stuff like that which scares people into irrational belief and reaction, e.g. the religious right.
I agree with Matt that with current systems it's very difficult to do things the way being discussed here. There are even neural-net computing machines. And computers have been around a while, yet relational databases (which is the structure I am modelling on) have been around a relatively short amount of time, roughly half as long as the period you specify. What that suggests is that it isn't just hardware that will improve and grant us new computational abilities, but also modelling techniques. The inefficiency is rooted more in poor searching algorithms than in anything else. Look at how quickly Google searches thousands of databases.
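A toy version of that point, same hardware, different modelling (the data is fake and this is nothing like Google's actual machinery):

 rows = [("id%d" % i, "note about topic %d" % (i % 50)) for i in range(100000)]

 def scan(term):
     # Poor algorithm: touch every row on every search.
     return [rid for rid, text in rows if term in text]

 # Better modelling: build an inverted index once, up front...
 index = {}
 for rid, text in rows:
     for word in text.split():
         index.setdefault(word, []).append(rid)

 def lookup(word):
     # ...so each later query is a single dictionary hop.
     return index.get(word, [])

 print(len(scan("topic 7")), len(lookup("7")))   # same 2000 hits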
Does thought equal soul? Does self-awareness equal soul? What the hell is soul? These are important questions.
--Edmiao 14:21, 24 April 2007 (MST) This is a soul and I think we should all have one, from time to time.
--Gdaze 14:22, 24 April 2007 (MST) So when the robots do take over, and are herding us into camps, at least we can spit in their faces and say "Well, at least I have a soul, you monster!" That is a good point though, Ed, about intelligence and souls. And for that matter, what will their rights be? I'm sure we can program them in a way so we don't get revolts all the time, though. This kinda reminds me of the M.O.M. implants in Rifts...
--Dieter the Bold 15:03, 24 April 2007 (MST) I recommend Visions, by Michio Kaku. He's a quantum physicist who makes predictions about future technology development in the fields of matter manipulation, biotechnology and computing. He has a big section on A.I. and goes over some of the current projects being worked on (1997 is the publication date). There seem to be two main approaches to A.I.: top-down, programming every "if -> then" rule possible into a supercomputer and letting it go from there, and bottom-up, programming really simple little robots to interact with each other and modify their own behavior based on those interactions. Michio thinks the breakthrough will come when they meet in the middle.
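In cartoon form, the contrast looks something like this (entirely schematic; the rules and rewards are made up for illustration):

 # Top-down: enumerate "if -> then" rules ahead of time.
 RULES = {"obstacle": "turn", "goal": "stop", "open": "forward"}

 def top_down(percept):
     return RULES.get(percept, "???")     # helpless outside its rulebook

 # Bottom-up: a dumb agent that adjusts its behavior from feedback.
 def bottom_up(actions, rewards):
     prefs = {}
     for a, r in zip(actions, rewards):
         prefs[a] = prefs.get(a, 0) + r   # reinforce what worked
     return max(prefs, key=prefs.get)     # behavior emerges, never spelled out

 print(top_down("obstacle"))                              # turn
 print(bottom_up(["turn", "stop", "turn"], [1, -1, 1]))   # turn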
--Jason 15:49, 24 April 2007 (MST)I read one of his books back in the '80s, Beyond Einstein. He is a pretty good writer.
I think all of this is losing sight of the actual point here. When does a simulation actually become the real thing? That's a slightly more general question, but it's the essence of what we are attempting to get at here. How do we define this threshold?
--Edmiao 18:25, 24 April 2007 (MST) that's a very different question: not if, but when. computers will get more and more advanced, they will simulate human behavior, and how do we know whether they have become sentient or not?
--Edmiao 18:31, 24 April 2007 (MST) Another thought. the difference between us and chimpanzees on a genetic level is like less than 1% or something. Chimps can learn and communicate in sign language. On a global scale, the difference between chimp intelligence and human intelligence is tiny. However, that tiny difference means that humans rule the world while chimps are a sideshow. What if we make AI and overshoot on its intelligence, making it smarter than we are? In fact, it would be very difficult to match AI to human intelligence exactly; it is more likely to eventually far overshoot our intellect. That sounds really, really scary.
--Jason 10:22, 25 April 2007 (MST)According to those who spend their lives on AI, it isn't as obvious as it seems to you and me. I agree with you. I think simulation of intelligence is not just possible, but inevitable.
I just did a little reading on the chimp DNA thing, and I'm obviously not an expert, but here is what I gather. Of 24 genetic markers, chimps share 23 with us. Of those, 18 are structured pretty much the same, while the remaining 5 are structured quite differently. So in reality that "98.5% human" thing is a bit of an advertising lure. Another consideration is that some types of worms share about 16 genetic markers with us, but that doesn't make them human at all. It's also currently believed that orangutans are smarter than chimps. Even if we take them, however, are they really even as functionally intelligent (by my definition) as Forrest Gump (IQ approximately 80)? Yes, it's a movie, and much of it is farcical. Yet when we see the things he does, some of it is luck, some is movie bs, and some is the difference between us and other animals. Can we simulate that?
Just like with your super-intelligent computer idea. Of course it's scary, and my knee-jerk reaction is to think it's probably somewhere we shouldn't go. But the more I think about it, the more I believe that just because it has great capacity for advanced thought, that doesn't mean it has opinion, even if it has self-awareness. The next step: even if it has opinion, that doesn't mean it has desire. Of course, the real problem in that case lies with those who have access to its controls.
--70.103.113.80 12:35, 25 April 2007 (MST)Straight out of Gabe-ville[1]
--216.64.164.103 13:22, 25 April 2007 (MST) Sweet, I totally would like that. I really like this typo, though: "Japans is hooked on androids, with several companies selling robots that..." JAPANS! With an S! Also I'm digging the IV eatery with waitresses that wear nurse outfits with bunny ears.
--Jason 13:36, 25 April 2007 (MST)Come now, let's be realistic. Who needs bunny ears when you can have this[2]
--Justin 17:21, 29 April 2007 (MST) I am really pissed off right now. I just had this huge paragraph of brilliant information and insight that I was typing here, something about experience, souls, creativity, and the fact that there is no new information, only recycled energy, essentially putting you all in your respective places, and I lost it all when I hit some button that caused my page to refresh. FUCK YOU UNIVERSE! never mind...
--Justin 17:22, 29 April 2007 (MST) I'll try it again later...
--Gdaze How about instead of that, you move back here so we can have another player AND your yummy creamy... fill up my mouth... ice cream.
--Edmiao 12:33, 30 April 2007 (MST) so much for intelligent life on this planet. it just got creamed.
--Jason 12:35, 30 April 2007 (MST)Would it be too cliché to say beam me up? Of course, with James Traficant in jail, someone has to say it!
--Jason 15:17, 30 April 2007 (MST)Ed, this[3] seems to be exactly what you're afraid of. Some of the logical leaps here can be quite frightening. I'm getting more interested in this the deeper I go. This[4] is the essence of 50% of Gabe's characters.
--Gdaze 11:16, 2 May 2007 (MST) I derail intelligent conversations with but a mere flick of my wrist. That aside, that transhumanism stuff is pretty interesting. I read an article on it, though, saying how of course a lot of these procedures would cost a lot of money. This would lead not only to a gap in resources, or rich and poor, but eventually to an even greater gap in ability than already exists. The rich would make their babies smarter, stronger, etc. I mean, who are you gonna hire... the guy who went through a community college, then normal college, or the guy who went through college and has an Intel Giga Mark V brain implant with Eagle Eye (TM) eyes? So as neat as it sounds, and as much as I like the idea of it, I think it would also cause a lot of problems.
--Jason 11:47, 2 May 2007 (MST)I don't see the problem. It's just more evidence pointing to the obvious conclusion that rich people are inherently better. Welcome back to the Middle Ages, we've missed you!
--Gdaze-- Well, d'uh! Otherwise, why would God give them money if they weren't better? Now what we really need is for the rich to have power armor, a la Nercomunda. Is that how you spell it...?
--Jason 12:49, 2 May 2007 (MST)Necromunda. They were called Spyrers, and I think there were 3 types. I don't remember all of the names, but one was Orrus. Why can't I be rich? I wanna be better than everyone else.
--Matts 14:10, 2 May 2007 (MST)Anything Ray Kurzweil says is retarded. That is all.
--BenofZongo 14:29, 2 May 2007 (MST)That is NOT all. Everything Matt Smith says is ALSO retarded. NOW that's all.
--Jason 14:48, 2 May 2007 (MST)Why doth Matt Smithicus hateth Ray Kurzweil? After reading his Wikipedia page, he doesn't seem like a complete moron. So I have to agree with Ben, at least in principle.
--Matts 17:58, 27 July 2007 (MST)I heard the other day about a drug that blocks the formation of memory. What was interesting is that with this drug in the system, access to memory was also disrupted; the memories, when accessed, would disappear. Apparently this is a fairly old study, but what it suggests is that our brains reconstruct our memories every time we recall them. That is to say, when you talk about the "creation of entities" in our brain, what are entities? What ultimately constitutes data in our brains? A significant part of any memory we have is actually our brain reinterpreting, in real time, some small core of data, which means we've basically "learned" to remember that specific event.
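In caricature, the model that suggests looks something like this (a cartoon of my own, not the study's actual mechanism):

 import random

 core = {"place": "beach", "mood": "good"}   # the small stored core of data

 def recall(core, context):
     # The 'memory' we experience is reconstructed on the fly
     # from the core plus whatever we bring to it right now...
     memory = dict(core, colored_by=context)
     # ...and the act of recalling re-stores the core, so recall
     # itself can overwrite what was there (which the drug exposed).
     if random.random() < 0.3:
         core["mood"] = context
     return memory

 print(recall(core, "nostalgic"))
 print(core)   # may already differ from the original event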