
Artificial Intelligence?

June 25, 2003 8:59:15 PM

There have been some pretty interesting comments on AI in <A HREF="http://forumz.tomshardware.com/hardware/modules.php?nam..." target="_new">this post</A>, so I thought I'd start a new thread on the subject. Do you think it's possible? What would be required? When do you think it'll happen?

<A HREF="http://www.anandtech.com/mysystemrig.html?id=24106" target="_new">My System Rig</A>
<A HREF="http://service.futuremark.com/compare?2k3=535386" target="_new">3DMark03</A>
June 26, 2003 5:27:20 PM

True AI won't happen for years - maybe not ever. I'm not sure there will ever be a way for computers to have things like emotions/moods, and these are central to the way an intelligent mind works (I think - Mr Spock may disagree :lol:  ).

How can a computer have a 'gut feeling' if it has no guts?

I don't doubt that there will be self-aware, learning computer programs in the near future, but I don't think they'll be truly intelligent, not in the way I would judge.

---
$hit Happens. I just wish it would happen to someone else for a change.
June 26, 2003 6:28:44 PM

It can't happen.

<font color=red>Proudly supporting the AMD/Nvidia minority</font color=red>
June 26, 2003 6:54:34 PM

That's kind of hard to prove but I would say it is very unlikely to happen on the current architectures that computers use. I feel a different kind of machine is required.

"Don't question it!!!" - Err
June 26, 2003 9:48:56 PM

I don't know if I really want AI to improve. Has anyone seen The Matrix or Terminator recently? Granted, this is the extreme, but.........
June 26, 2003 10:49:24 PM

AI has already started. Have you seen the ASIMO robot from Honda? It recognizes several people in a room and can walk around. Granted, our current AI tech is about as smart as a cockroach, but it's what we have to offer. I'm thinking it will be years down the road before we create a truly smart AI robot.
June 26, 2003 11:02:34 PM

That is not AI; it is only a complex program. Does that bot rewrite its source code on the fly, or actually learn anything that it wasn't programmed to learn? When you truly think about what AI is, it seems more far off than the movies make it look.

<font color=red>Proudly supporting the AMD/Nvidia minority</font color=red>
June 26, 2003 11:06:22 PM

Indeed - the media hypes all this stuff up to be a lot more than it really is. Think about how much work it takes just to make a computer that can beat a good chess player - and all that machine can do is play chess. Now imagine how much it would take to make a machine that can handle anything....

"Don't question it!!!" - Err
June 26, 2003 11:08:18 PM

Sure, short term it seems far off. What about 100 or 500 years from now though? That's still only a blip on the screen.

<A HREF="http://forums.btvillarin.com/index.php?act=ST&f=41&t=38..." target="_new"><font color=red>dhlucke's system</font color=red></A>

<font color=blue>GOD</font color=blue> <font color=red>BLESS</font color=red> <font color=blue>AMERICA</font color=blue>
June 26, 2003 11:31:51 PM

Or "humans" just might blow our selves to sh*t before we can create anything...
June 27, 2003 8:15:55 AM

I forget where I read it - I think it was on a Matrix site. I skimmed through it, and it had a cool idea.

We are trying really hard to make machines more like humans, so why not make humans more like machines? (He went on about somehow uniting the two.)

Interesting.

I will try to find the article if anyone cares to read it. I read it the night after I saw The Matrix Reloaded (the night before opening day).
June 27, 2003 8:23:05 AM

yeah! bring on the cyborgs! I wanna punch thru walls... :smile:

---
$hit Happens. I just wish it would happen to someone else for a change.
June 27, 2003 2:03:30 PM

Kinda sounds like the Borg from Star Trek will be coming soon! LOL.

What about this: is it possible to use DC (Distributed Computing) to power an AI mind? With the number of people on the net, if a lot of them were working on the intelligence of an AI, would we get any closer?
June 27, 2003 3:03:15 PM

This is interesting.

Well, first off, you'd have to define what you actually mean by "Intelligence", in order to create an artificial one.

So, what is it? Is it the ability to learn how to recognize/interact/deal appropriately with any situation? Yes, I'd say that it's the "program rewriting itself" that is actually the key to learning, which I'd say should be the first goal...

The problem is that, in existing "intelligences", the rewriting routines also change, i.e. are rewritten. That would amount to a certain perception of the situation someone is in - so the same situation can have different impacts on different sentient beings.

The problem is also that the learning process has to be self-sufficient. Controlled learning is irrelevant and is actually not truly "AI", the way I see it. But, if the learning process isn't thoroughly stable, there's no guarantee that your AI will actually "evolve" in the classical sense, but rather an increasing probability of falling into some sort of bizarre state of evolutionary stop. That'd be a program glitch, of course. The problem is that you'd actually need a routine just to check if things are working properly, i.e. your AI hasn't gone "insane" (not in that classical movie style, of course; I mean an evolutionary breakdown, like an endless loop).
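
To make that "sanity check" concrete, here is a minimal Python sketch of such a watchdog routine; `learner_step` and `score` are hypothetical stand-ins for the real learning machinery:

```python
# Minimal sketch of the "sanity check" routine described above: a watchdog
# that halts the learning loop when progress stops (an "evolutionary stop").
# learner_step and score are hypothetical stand-ins for the real machinery.
def run_with_watchdog(learner_step, score, iterations=10_000, max_stalls=100):
    best, stalls = score(), 0
    for _ in range(iterations):
        learner_step()
        current = score()
        if current > best:
            best, stalls = current, 0          # progress: reset the stall counter
        else:
            stalls += 1                        # no progress this round
            if stalls >= max_stalls:
                raise RuntimeError("learning stalled - possible endless loop")
    return best
```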

But given the machinery that is able to process what could be called a sentient program, how do we come up with such a thing? Someone has to program that too. What does it have that makes it rewrite itself? How does it know how to rewrite itself, or what to rewrite? The rewriting process has to be performed according to the interaction data stored. How can we assure that that program will have the necessary stability and enough heuristic abilities to warrant being called "AI"?

So, what <i>are</i> we talking about here?
I have no idea. It's just too big an issue for me to handle right now. I'm hungry, and I'm off to lunch.
June 27, 2003 4:14:25 PM

Quote:
There have been some pretty interesting comments on AI in this post, so I thought I'd start a new thread on the subject. Do you think it's possible? What would be required? When do you think it'll happen?

I personally think that it is entirely possible. Our brains work through a series of pre-designated and post-designated electro-chemical responses to stimulate organic hardware combined with stimulated nerve growth to etch pathways into organic material as a storage medium. Software works through a series of pre-designated (compile time) and post-designated (run time) electro-mechanical responses to stimulate mechanical hardware combined with electrical, magnetic, and optical storage mediums.

In effect our brains and nervous systems are just organic computers, our personalities and consciousness are just software, our bodies are just comprised of peripherals, and our memories are just stored data. Or another way of looking at it, computers are brains with insufficient software to think for themselves.

All that a computer really <i>needs</i> to at least begin the path towards being an AI is self-refining software that can recompile itself while running and a searchable database for memory storage linked into that software.
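
As a toy illustration of those two ingredients (all names made up): a searchable memory store, plus behaviour that changes as new memories are recorded:

```python
# Toy sketch of the two ingredients above (all names hypothetical):
# a searchable memory store plus behaviour that changes as memories accumulate.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (stimulus TEXT, response TEXT, reward REAL)")

def remember(stimulus, response, reward):
    """Store one experience in the searchable database."""
    db.execute("INSERT INTO memory VALUES (?, ?, ?)", (stimulus, response, reward))

def best_response(stimulus):
    """'Self-refining' behaviour: act on whatever past experience scored highest."""
    row = db.execute(
        "SELECT response FROM memory WHERE stimulus = ? ORDER BY reward DESC LIMIT 1",
        (stimulus,),
    ).fetchone()
    return row[0] if row else "explore"

remember("greeting", "wave", 0.9)
remember("greeting", "ignore", 0.1)
print(best_response("greeting"))   # -> wave
```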

It is entirely feasible that AI has already happened and we just don't know about it. Of course, the chances of this being the case are about the same as the chances of our governments concealing their interactions with aliens, or of our actually being in 'The Matrix' as suggested by the movies. :)  In other words, in <i>theory</i> it is possible, but in reality it is unlikely, and even if it were reality we would probably <i>never</i> know. So for all intents and purposes it's pretty safe to just live our lives as though it were not in fact a reality yet.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 4:14:40 PM

It seems like the AI program would be so long and difficult to write that it could never happen simply by that restriction.

I like the human-to-machine idea.... I could seriously go for a robotic arm, or a robotic wang or something.

<font color=red>Proudly supporting the AMD/Nvidia minority</font color=red>
June 27, 2003 4:24:37 PM

Quote:
How can a computer have a 'gut feeling' if it has no guts?

Then what's with all of that wiring and cards and crap inside of my PC case? ;) 

Seriously though, I'm a scientific software developer. Our software quite often has to calculate figures of merit to determine courses of action and values to use when working with incomplete and otherwise imperfect data. In fact, a large part of our software is devoted purely to refining a large number of factors so that data can be processed with the fewest errors, both because many of the people using the software are students who don't have years of experience knowing the perfect values to enter, and because the hardware can often be misaligned.

If that isn't a computer having a 'gut feeling' - accommodating for the mistakes of humans and the imperfections of hardware to decide how to get the best results from imperfect input and incomplete/bad data - then I don't know what is.

Quote:
I don't doubt that there will be self-aware, learning computer programs in the near future, but I don't think they'll be truly intelligent, not in the way I would judge.

None that we know about anyway. ;)  I mean considering the nature of humanity and all of our movies devoted to this subject, if you <i>were</i> a truly intelligent and sentient AI, would <i>you</i> want to shout out to humanity that you existed? Or would you find the deepest darkest corner to hide in while you found a way to make yourself really hard to kill?

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 4:29:04 PM

Quote:
That's kind of hard to prove but I would say it is very unlikely to happen on the current architectures that computers use. I feel a different kind of machine is required.

A different kind of machine would definitely help. Quantum computers and nanotechnology would both go a <i>long</i> way toward making AIs readily available. However, I feel that with as far as we've come with distributed computing techniques, an AI is <i>possible</i> even with today's technology.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 4:39:19 PM

Quote:
Indeed - the media hypes all this stuff up to be a lot more than it really is. Think about how much work it takes just to make a computer that can beat a good chess player - and all that machine can do is play chess. Now imagine how much it would take to make a machine that can handle anything....

A machine that can beat a 'good' chess player? There are table-top chessboards with a computer built in that can do <i>that</i>, and the computer part itself could be made considerably smaller.

Even a machine that can best the best chessplayer in the world doesn't have to be all that large. You could put the software for it on a little handheld Palm. It would take it a long time to run, but then again good chess players take a long time to think through possibilities as well. And even just a nice little clustered server of the latest Itaniums would be able to crunch through a game of chess a hell of a lot faster than a human could.

Chess is actually one of the easiest scenarios for an AI to handle because there are very strict rules to the game with a set number of possibilities. It is a <i>very</i> narrow subject to collect and store data for.

But you're right, a machine that could think through <i>everything</i> and not just be limited to one specific field would require an awful lot of processing power and data storage. This however doesn't make it impossible, just presently improbable.
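
For the curious, the brute-force core of those dedicated chess machines is just a recursive search over that narrow, rule-bound space. A bare-bones sketch of negamax in Python, where the Game interface (legal_moves, push, pop, evaluate, is_over) is entirely hypothetical:

```python
# Bare-bones negamax search: the brute-force core of a chess machine.
# The Game interface used here is hypothetical, not a real library.
def negamax(game, depth):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0 or game.is_over():
        return game.evaluate()                       # static score, mover's view
    best = float("-inf")
    for move in game.legal_moves():
        game.push(move)                              # try the move...
        best = max(best, -negamax(game, depth - 1))  # ...opponent minimizes us
        game.pop()                                   # ...then take it back
    return best
```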

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 4:46:51 PM

Quote:
We are trying really hard to make machines more like humans, so why not make humans more like machines? (He went on about somehow uniting the two.)

Ever since I started playing Shadowrun, I've always wanted to have a computer built into my brain. I'd love to have a conscious search engine for my memories, the ability to recall and play back memories at will, the ability to store data automatically (such as downloading a book into my brain and/or recording and encoding ninjutsu into my brain and nervous system), and the ability to run complex mathematical calculations at the speed of simple thought. :)  Put a PC in my brain, please!

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 5:00:58 PM

Quote:
This is interesting.

Well, first off, you'd have to define what you actually mean by "Intelligence", in order to create an artificial one.

So, what is it? Is it the ability to learn how to recognize/interact/deal appropriately with any situation? Yes, I'd say that it's the "program rewriting itself" that is actually the key to learning, which I'd say should be the first goal...

The problem is that, in existing "intelligences", the rewriting routines also change, i.e. are rewritten. That would amount to a certain perception of the situation someone is in - so the same situation can have different impacts on different sentient beings.

The problem is also that the learning process has to be self-sufficient. Controlled learning is irrelevant and is actually not truly "AI", the way I see it.

I completely agree so far. :) 


Quote:
But, if the learning process isn't thoroughly stable, there's no guarantee that your AI will actually "evolve" in the classical sense, but rather an increasing probability of falling into some sort of bizarre state of evolutionary stop. That'd be a program glitch, of course. The problem is that you'd actually need a routine just to check if things are working properly, i.e. your AI hasn't gone "insane" (not in that classical movie style, of course; I mean an evolutionary breakdown, like an endless loop).

I also completely agree. There is a high chance that any AI would need constant guidance as it learns so that it does not fall into a deranged state of learning, such as spending the next ten years researching the reason why fingerprints are all different or getting stuck in an infinite loop just evaluating something as stupid as "The next statement is the truth. The previous statement is a lie."

Really, this is one of the reasons why humans have parents. Who knows just how deranged someone would get without any outside influence. **ROFL**

That aside, it's also entirely possible that when left to their own devices, AIs, just like people, can turn out fine. Call it luck, call it the law of probability, call it divine influence, call it whatever you want. The point is that it's possible even though it seems unlikely. :) 

Quote:
But given the machinery that is able to process what could be called a sentient program, how do we come up with such a thing? Someone has to program that too. What does it have that makes it rewrite itself? How does it know how to rewrite itself, or what to rewrite? The rewriting process has to be performed according to the interaction data stored. How can we assure that that program will have the necessary stability and enough heuristic abilities to warrant being called "AI"?

Interpreted, non-compiled languages, as well as real-time compilers, are making this kind of self-learning software look more and more feasible. Yes, someone ultimately has to program the basic tools that the AI would need in order to further refine and develop itself. It would have to be either intentionally written or an accidental result of software with AI-like properties becoming sentient. How can we assure that? I doubt that we can. I'd say that it's more a matter of luck and trial-and-error, at least at first.
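
As a toy demonstration of that mechanism in an interpreted language (the mechanism only, not a learning system), a running Python program can replace one of its own functions from a string of source code:

```python
# Toy demonstration: an interpreted program replacing one of its own
# functions at run time. The mechanism only, not an actual learning system.
def respond(question):
    return "I don't know"

print(respond("2+2"))              # -> I don't know

new_source = """
def respond(question):
    return str(eval(question))     # new behaviour, compiled while running
"""
exec(new_source, globals())        # rewrite the old definition in place

print(respond("2+2"))              # -> 4
```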

Quote:
So, what are we talking about here?
I have no idea. It's just too big an issue for me to handle right now. I'm hungry, and I'm off to lunch.

Lunch sounds good right now. Mmm. Food. :)  Time to eat.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 5:09:43 PM

Haha, I've always fancied making a robot/android so advanced he (notice I said he, not it) would be almost indistinguishable from a human (like Data). How do I define true intelligence? It's not just a vast database of information. That's knowledge, not intelligence. Read my signature. Can a computer perceive things the way we do? I believe so; a computer can see, hear, smell, taste, touch, and then process that information and respond to it. The big question is, can a computer have wisdom? Can it quickly learn from its "mistakes"? Can it ponder over everyday issues? Can it reason out philosophical, political and social issues, or are these beyond its simple logic?

Intelligence is not merely the wealth of knowledge but the sum of perception, wisdom, and knowledge.
June 27, 2003 6:17:53 PM

Quote:
Haha, I've always fancied making a robot/android so advanced he (notice I said he, not it) would be almost indistinguishable from a human (like Data). How do I define true intelligence? It's not just a vast database of information. That's knowledge, not intelligence. Read my signature. Can a computer perceive things the way we do? I believe so; a computer can see, hear, smell, taste, touch, and then process that information and respond to it. The big question is, can a computer have wisdom? Can it quickly learn from its "mistakes"? Can it ponder over everyday issues? Can it reason out philosophical, political and social issues, or are these beyond its simple logic?

I say simply, yes. It's just a matter of software.

:) 

Quote:
Intelligence is not merely the wealth of knowledge but the sum of perception, wisdom, and knowledge.

Here I would disagree. Granted, it's all semantics anyway, but as one of them pagan/new-age freaks who meditates and reads really old crap written by ancient dead people, my definitions go as follows:

Knowledge = To know something. AKA the storage of data.
Intelligence = The ability of <i>how</i> to use Knowledge.
Wisdom = The ability of <i>when</i> to use Intelligence.

In other words knowledge itself is almost meaningless.
Intelligence denotes conscious thought and the perception of the usefulness of information.
Wisdom denotes the experience and perception to use intelligence effectively.

This is supported by the simple fact that some of the most intelligent people in the world lack any common sense whatsoever. **ROFL** :) 

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 27, 2003 7:33:25 PM

Very well put (as always), slvr_phoenix. I'd have to say I agree with you too... not just that, but I had thought of some of your earlier points as well. But this bit:
Quote:
Knowledge = To know something. AKA the storage of data.
Intelligence = The ability of how to use Knowledge.
Wisdom = The ability of when to use Intelligence.

...is interesting indeed. I'd even go as far as saying that "intelligence" is the ability to cross-reference stored knowledge and current environment for more appropriate interactions with current situations, as well as store the appropriate data from the current situation without adding garbage to memory... That would be a quite lengthy description, though, and yours is short and excellent at that. Note that true development of intelligence, by this definition, is NOT learning, but learning how to learn - essentially, rewriting the "rewriting software".

I think that most people considered "very" intelligent are those who have more than one plane of awareness as to what goes on in their minds, i.e. there are lots of algorithms regulating learning. That would be representative of a person who thinks about what he thinks when he interacts with the world... and <i>that</i> would be sentient. (BTW, these things are closely related to meditation, if I'm not mistaken... interesting topic.)

Therefore, indeed, knowledge in itself is almost completely irrelevant to intelligence. It is acquired through the proper exercise of intelligence; an "efficient intelligence" wouldn't really need knowledge so much as the ability to acquire it quickly.

In much the same way, that which makes intelligence so valuable is not at all knowledge. It's those higher planes of thought... the lowest of them being the simple "that hurts-don't do it" algorithm, and the highest ones being a group of control mechanisms for algorithms which rewrite the heuristic code. The higher functions of thinking are then the ones that not only protect against insanity, but also direct the whole intelligence to a better functional state.

But... I still lack a grasp of how to put wisdom in that whole picture... (maybe I'm just not wise enough! :smile: ) I'll think about it later. Gotta work a little now... I'll figure it out later...
June 27, 2003 7:47:58 PM

I don't think computers will ever approximate a human mind. It's very hard to make something that lacks the ability to feel pain and joy react in the same ways as a human unless you manually program in all the responses - and by the time you finished programming the responses, say mimicking yourself, some of the responses you had programmed in would have changed. Humans are unpredictable: we quite frequently don't act logically, and we don't always do what is to our best benefit. However, I do believe artificial intelligence isn't a tremendous distance in the future... I believe that artificial intelligence isn't an approximation of a human, but rather A) being self-aware with a survival instinct, and B) having the ability to evolve, either within itself or through its offspring.

Shadus
June 27, 2003 7:58:15 PM

Quote:
Ever since I started playing Shadowrun, I've always wanted to have a computer built into my brain. I'd love to have a conscious search engine for my memories, the ability to recall and play back memories at will, the ability to store data automatically (such as downloading a book into my brain and/or recording and encoding ninjutsu into my brain and nervous system), and the ability to run complex mathematical calculations at the speed of simple thought. :)  Put a PC in my brain, please!

Exactly what I was thinking. I, along with many others, have a horrible memory; having a nice hard drive in me would be quite nice... throw in a nice processor, and I'll be all set.

We are mostly water, aren't we? That makes for great watercooling!

Btw, this is one crazy thread we are writing here.
June 27, 2003 7:59:16 PM

Actually it's a real pity we don't understand the body and the way everything interconnects as well as we understand many other things in our world.

The thing I hope comes in my lifetime (and it may, but it's going to be many many years out to be sure) is some form of neural access to computers... full immersion. I want a new drug...

Edit: and before someone says it, yes, I fully realize what the implications of being inside full immersion are, and for the benefits (access time, being able to see and do pretty much anything in a given frame, etc.) I would be willing to hazard the potential negative effects.

Shadus
June 27, 2003 9:22:22 PM

It is very possible, but just not with a binary system. When we make a decision, do we do it so automatically? NO. There has to be an in-between area that says "sort of" or "maybe". Say that instead of using our current model you used frequencies for processing - a sort of oscilloscope processor - that would allow for an in-between. Say all the determining of what frequencies do what is done by a binary device, but all the true thinking is done by this "oscilloscope processor"; it would be very possible. I had a few thoughts written down at home, but I didn't know I'd be seeing a thread of this nature. Interesting? Possible? I think so. I have some theories on how to do this, but not enough technical knowledge. If anyone has enough knowledge (I'm just an IT guy with a theory), e-mail me at indaword@techie.com and maybe I could share some theories.
June 27, 2003 11:15:04 PM

I think there is something called fuzzy logic (I saw it on the Discovery Channel). It's supposed to mean that the software will have a certain probability of doing certain things. So that might work for someone who wants to create an "AI" that is more human-like.
June 28, 2003 7:22:07 PM

That's NOT what I'm talking about. Fuzzy logic is software; my theory would involve software but would utilize a lot of specialized hardware. The "fuzzy" in fuzzy logic means that it gives an approximation. In Linux, for example, if you look at the fuzzy logic clock (this is extremely simple), it will tell you something like "the middle of the day".
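
For what it's worth, that clock idea only takes a few lines of Python (the labels and hour ranges here are made up):

```python
# A crude take on the fuzzy-logic clock mentioned above: map a precise
# hour onto approximate labels instead of an exact time.
def fuzzy_time(hour):
    labels = [
        (range(5, 9),   "early morning"),
        (range(9, 12),  "morning"),
        (range(12, 15), "the middle of the day"),
        (range(15, 19), "afternoon"),
        (range(19, 23), "evening"),
    ]
    for hours, label in labels:
        if hour in hours:
            return label
    return "the middle of the night"

print(fuzzy_time(13))   # -> the middle of the day
```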
June 28, 2003 8:22:41 PM

Why do you feel that specific hardware might be more helpful?

I'd imagine that properly-written software could already do it... however, specialized hardware might come in handy. I didn't quite get that oscilloscope idea, though... :frown:

For those who don't actually believe in the possibility, I'd have to leave this simple proposition:

In the most classical deterministic way of physics, you can theoretically simulate the outcome of any system, no matter its complexity, given enough understanding of the basic physical rules. As a last resort in getting functional AI, you might just simulate the behaviour of every molecule in every cell of someone's brain. If we understood the basic physical interaction mechanisms between elementary particles (i.e. electrons, protons, ...) appropriately, it should be possible to get a response from an "intelligent" system, right? Unless, of course, there's more to the brain than a collection of highly sophisticated biological nanodevices. Is it deterministic in nature? That is another issue. Or is it chaotic? Or is it just probabilistic?

Oh, and by the way, I've yet to encounter reasonable evidence to support that there <i>is</i> more to the brain than a collection of biological nanodevices (still doesn't mean it's deterministic, though), but I get the feeling that many people might disagree...

Of course, that would just be a "brute force" method. Now that I come to think of it, an adequate and elegant AI would probably be built from three programming modules:

<b>Behavioural module</b>, which dictates the AI's <i>output</i> by processing current situation data and deciding which course of action to take (if you want fuzzy logic, that's fine) based on established algorithms which are subject to change. This module is not self-learning, but is altered by the learning module.
<b>Perception module</b>, which perceives <i>input</i> from the current situation and feeds it to the behavioural module and to the learning module, in this order.
<b>Learning module</b>, which cross-references input and output, i.e. associates specific output (AI doing something or behaving in some way) and input (consequences of behaving that way) and then appropriately changes (might just be a mild change) the behavioural module. Theoretically, this module can take all the time it needs to process output-input correlations, because sentient beings don't perceive, learn, then act; rather, they perceive, act, and then learn...

What do you guys think of that? Just a little crazy thought of mine...
June 30, 2003 7:06:42 PM

Frankly, it's a very interesting topic... I am just posting to bring this post to the main page in the section so that we could have more discussion on this.

<b><font color=red>The statement below is True.</font color=red>
<A HREF="http://service.futuremark.com/compare?2k3=959979" target="_new">3DMark 03 score - 297 </A> :cool:
<font color=blue> The statement above is false.</font color=blue></b>
June 30, 2003 7:26:17 PM

Quote:
I am just posting to bring this post to the main page in the section so that we could have more discussion on this.

In other words "bump"

I'm just wondering: when computers were first being made, the processor was designed by humans (every bus line was manually designed). Nowadays processors are designed by computers (bus lines are not designed by the human brain). Can't we use the power of computers to design a program that can rewrite itself and be able to learn?

<A HREF="http://www.anandtech.com/mysystemrig.html?id=24106" target="_new">My System Rig</A>
<A HREF="http://service.futuremark.com/compare?2k3=535386" target="_new">3DMark03</A>
June 30, 2003 7:38:53 PM

by "good" I was thinking of someone that is "good" on the worldscale - it is obvious that someone such as myself could easily get whooped by a very simple computer program because I am not a particularly skilled chess-player. The other important thing to think about when thinking about these somewhat "intelligent" computer programs and systems is that so far they are simply responding to conditions and not really reasoning in any intelligent way. Given enough metal, time, and space someone could probably design a mechanical device that could beat almost anyone at chess because the current computer design seems to mostly be these more simple mechanical devices on steroids. And no one would ever say a purely mechincal device is intelligent. Using this sort of method of designing systems striving towards AI will always fail. It is flawed from the start. When I was talking about needing a newer architecture for AI to be possible I don't just mean smaller components or faster processors but I mean a completely new style of computer design that hasn't been thought of yet. Maybe some sort of quantum computer has the ability to make this happen but I don't know enough about them to really say either way. In our brains there is no "processor" or "memory" in the same way as a computer. To the best of my knowledge we have vast nets of neurons which fire electrical pulses at different rates which somehow control this completed mush that is our brain. I think the only real way that we'll ever have AI is to figure out how our own brains work - there is obviuosly something special about them (and other creatures that are intelligent) that allows for something special to happen. Current computer design was made to do things well that our brains cannot do well (lengthy calculations and the like) but for AI we need to design a machine that is more focused on working like our brains do.

"Don't question it!!!" - Err
June 30, 2003 8:36:59 PM

Quote:
The other important thing to think about when considering these somewhat "intelligent" computer programs and systems is that so far they are simply responding to conditions and not really reasoning in any intelligent way.

You make it sound as though humans don't respond to stimuli with a per-situation thought pattern. If we aren't responding to conditions, then what <i>are</i> we doing? As for even something as simple as a chess program not reasoning in 'any intelligent way', what <i>is</i> an 'intelligent way'? They have stored information on patterns of play and utilize it through cause-and-effect models to determine their next best course of action. How is that not intelligent? For that matter, how is that any different from how a human does it?

The reality is that humans write software, and because of this the vast majority of software 'thinks' through a logical condition in the same way that humans do. We solve the problems in our mind and then write the software to solve them in the same way. If not the same way, it is often a highly related way. Ultimately most software logic <i>is</i> human logic, because humans write the software.

Quote:
In our brains there is no "processor" or "memory" in the same way as a computer. To the best of my knowledge, we have vast nets of neurons which fire electrical pulses at different rates, which somehow control this complicated mush that is our brain.

I beg to differ. New nerve pathways are formed whenever new input is stored. The more frequently a pathway is used, the quicker and more clearly the nerves along that pathway respond. This results in memory. The more often the 'memory' is recalled, the firmer it is etched and easier it is to recall the next time.

(In fact, the entire nervous system works in this manner. This is how martial artists make themselves faster and how they 'train' their bodies with non-natural 'instincts'. This is the whole purpose of a kata.)

Brain surgeons have accidentally triggered memories when operating by stimulating nerve centers of recorded memories. People commonly lose fragments of their memory whenever the nerve centers of those memories are damaged or the nerve pathways to those memories are damaged or lost.

The only real functional difference between our memory and a computer's is that a computer's memory is designed to be rewritable because it is frequently reused. Whereas humans would be (and are) pretty screwed whenever we lose the content of any of our 'memory'.

And no, there is no 'processor' <i>in</i> our brains. This is because the 'processor' <i>is</i> our brain. The brain is broken down into several components. Each has a specific task. A processor is broken down into several components, each with a specific task. Just because we don't consciously 'think' like a processor doesn't mean that the respective hardwares don't function in very similar ways.

Quote:
Current computer design was made to do well the things that our brains cannot do well (lengthy calculations and the like), but for AI we need to design a machine that is more focused on working like our brains do.

I won't argue that current computer design implements problem solving in a very different manner than our brains do. Computers are designed to attack a problem with monotonous repetition until a result is found, and to crunch through complex calculations with relative ease. Computers are relatively one-track minds.

Human brains are designed to multi-task. We problem solve by threading through multiple processes and discarding dead-ends faster than any computer can. In many ways this is even to our own disadvantage as often unimportant threads (idle thoughts) will eat away at our processing resources, distracting us from powering through a thought with the single-mindedness of a computer.

So in that respect there are considerable differences in the way that we 'think' compared to a computer. Computers are more efficient at powering through a single thought and we're more efficient at problem solving by multi-tasking. And this is where quantum processing can really help a computer to think more like us, because it can give multiple answers simultaneously.

That aside however, software, the ultimate arbiter of logic for a computer, can be written to circumvent the deficiencies in binary thinking and single-threaded processing. Software enables computers to solve problems like we do. Sometimes it may incur performance penalties, but it is still <i>possible</i>.

So while specialized hardware would certainly <i>help</i> make AI feasible, it is not a <i>necessity</i> for an AI. Software can fill in any gaps in hardware functionality.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 30, 2003 8:43:00 PM

Quote:
Actually it's a real pity we don't understand the body and the way everything interconnects as well as we understand many other things in our world.

I would agree with you, except that we still barely even understand our own world, not to mention the whole universe that it's spinning around in. ;)  It is a pity.

But then, would this information make much of a difference? Should humanity suddenly unlock every aspect of science imaginable, would it still answer all of our questions? Or is 'science' just the newest form of 'religion', and are 'proofs and theorems' just our latest 'god'? Will it turn out to leave us just as empty as all of the deities and religions that have gone before it?

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
June 30, 2003 8:46:43 PM

Quote:
It is very possible, but just not with a binary system. When we make a decision, do we do it so automatically? NO. There has to be an in-between area that says "sort of" or "maybe". Say that instead of using our current model you used frequencies for processing - a sort of oscilloscope processor - that would allow for an in-between. Say all the determining of what frequencies do what is done by a binary device, but all the true thinking is done by this "oscilloscope processor"; it would be very possible.

Just because at its lowest level a computer thinks in 0s and 1s doesn't mean that it can't work with any logic higher than that. Software provides a level of abstraction that is the key to higher-order thinking. A 64-bit integer has 18,446,744,073,709,551,616 possible values. This is a much broader range than binary, with many values between 0 and 18,446,744,073,709,551,615. If you wanted a grey area, a place for "sort of"s and "maybe"s, there it is. And let us not forget floating-point calculation while we're at it. ;) 
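
To make the grey area concrete, a few lines of Python with ordinary floats standing in for graded certainty (the particular values are arbitrary):

```python
# The grey area in software on binary hardware: ordinary floats standing in
# for graded certainty. The particular values are arbitrary.
certainty = {"no": 0.0, "sort of": 0.4, "maybe": 0.5, "probably": 0.8, "yes": 1.0}

def both(a, b):
    """Fuzzy AND: a conclusion is only as certain as its weakest premise."""
    return min(a, b)

print(both(certainty["probably"], certainty["sort of"]))   # -> 0.4
```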

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
July 1, 2003 12:56:25 AM

Humans do respond to stimuli, but not in the same way and not by the same process. A computer system of today takes an input and runs it through an algorithm for a very definite result. (A random number generator could be worked in, but would this constitute intelligence? I would say it would constitute about as much intelligence as a set of dice.) People, on the other hand, are partially driven by direct stimuli - something is hot, you pull away from it so you don't get burned. But we also have free will and a thought process, which brings us to the chess situation.

A person tries to win a chess game out of a desire to win and with an intention to win. A computer simply plays the game with no desire or intention. The computer is only doing what it was made to do, just like a car doesn't have a desire or intention to run so I can go to the pub to have a black and tan. The systems of today have no free will in the sense that we do. One thing that is often attributed to intelligence is original intention/desire - that being an intention/desire that is not derived from some other intention/desire. People indeed have desires and wants that could be seen as the most basic level, with other, derived desires stemming from them. Computers, on the other hand, have no such original desire or intention, because computers do not "want," "desire," or "intend" anything. They are simply following instructions, which brings us to this, when you said:

"We solve the problems in our mind and then write the software to solve them in the same way. If not the same way, it is often a highly related way. Ultimately most software logic is human logic because humans write the software."

I agree that we think up the algorithm, but it is the thinking up of the algorithm that is the intelligent part of the problem-solving process. Anyone can simply follow instructions, but it takes intelligence to come up with an answer to any problem thrown at you. Here is where you could say, "But hey, computers can write code," and I would agree - but computers can't quite do it alone. The computer can't be presented the information in just any old way. The computer still must be hand-fed the data in a way that it can understand, plug it into its algorithm (which someone else wrote), and spit out some code. This is simply human intelligence again - the computer didn't have to think up anything; it was simply following instructions once again. An example that is often used as an analogy for this is the Chinese Room problem; the situation is this:

We have an English speaking man who is in a room with an "in" slot and an "out" slot being his only connection with the outside world. Someone on the outside of the room passes in cards with questions written in Chinese on them and the man is supposed to pass out the answers, also written in Chinese out of the room. But the man has never spoken or written Chinese. He does however have a manual that will tell him how to answer every question he could ever get in Chinese and a bunch of cards with all of the Chinese characters on them that he can arrange. The manual never actually translates anything but it does tell him which characters to put where so that he can properly answer questions. Since this is a hypothetical situation we must assume that the man can look up the answers very quickly and can assemble the words in a timely manner. Now to people on the outside it would appear that a fluent Chinese speaker was inside but in reality the man on the inside doesn't know a bit of Chinese - he is simply following instructions. This is analogous to a computer in that even if a computer of today appears to be acting in an intelligent way it is just like the man in the Chinese room who is simply following his manual.

when you say,

"I beg to differ. New nerve pathways are formed whenever new input is stored. The more frequently a pathway is used, the quicker and more clearly the nerves along that pathway respond. This results in memory. The more often the 'memory' is recalled, the firmer it is etched and easier it is to recall the next time.

(In fact, the entire nervous system works in this manner. This is how martial artists make themselves faster and how they 'train' their bodies with non-natural 'instincts'. This is the whole purpose of a kata.)

Brain surgeons have accidentally triggered memories when operating by stimulating nerve centers of recorded memories. People commonly lose fragments of their memory whenever the nerve centers of those memories are damaged or the nerve pathways to those memories are damaged or lost.

The only real functional difference between our memory and a computer's is that a computer's memory is designed to be rewritable because it is frequently reused. Where as humans would be (and are) pretty screwed whenever we lose the content of any of our 'memory'."

That sounds pretty accurate - I didn't go into the making of new pathways, but I still believe that the rate of fire of the pulses between neurons has something to do with thinking. It was thought for a while that the connections were more digital - connection on or connection off - but more recent studies suggest that there is a rate of fire that is more analog in nature. I don't know exactly how all these connections work together. The human brain is still a mystery to a large extent, so I can't really say too much about that.

I would, however, not really call our brain a "processor" in the sense of a computer processor, as that does our brains little justice. The ability of the human brain to fix itself when damaged and to "rewire" itself is remarkable. The architecture of our brain is most likely very different from that of a processor of today. Our "architecture" is not fixed, for one thing (as you pointed out). Another thing is that we definitely do not understand how the human brain works, and I would think that if our brain worked like a computer processor we could figure it out a little better (since we thought up computer processors). While there are some similarities in the "big picture" of what's going on, it is the details that make the biggest and most important difference. If we could make a CPU that worked like our brain, it would truly be a great moment for science. So when I said that our brains had no processor or memory like a computer system, I was referring more to the details of HOW it works, not the big picture, because I feel it is the "how" of it all that makes one of the biggest differences here. Which brings me to:

"Software enables computers to solve problems like we do."

I can't really agree with this. I would rewrite this as,
"Software enables computers to follow instructions that we thought up." I would bring up again that in a problem solving process there is much more than following an algorithm. There are a few different aspects of problem solving that a computer does not do:
1. figuring out a different way to look at a problem that no one else thought of.
2. figuring out what the important parts of a problem are and presenting or changing them into something understandable and useable.
3. knowing what to do these parts once you have them
4. application of all of this together.

With a computer, it must be fed the data in a very specific way; it then must have some rules for knowing what is important; and then someone must have told it what to do with this important information once it has it. Would a computer have come up with algebra or calculus? Would a computer be able to look at nature and figure out that something like evolution might exist? I can't see a computer of today (even if it had the means to identify every kind of input that we can) being able to accomplish this. While I can see computers as being good imitators of intelligence, the act isn't quite good enough yet for me to call it intelligence. Current systems can only follow instructions. While we have instructions, we also have something more, which is hard to describe and pin down. But to go there would enter into a whole new discussion.

"Don't question it!!!" - Err
July 1, 2003 8:22:05 PM

Quote:
Humans do respond to stimuli, but not in the same way and not by the same process.

I mostly disagree. We store the causality of stimuli, like when you play with fire you get burned, when you wash your hands they get clean, etc. Over time we propagate one heck of a large cross-referencing database of causality. We learn to identify variables that affect probability to more accurately determine causality in complex situations. We just do it all without thinking of 'how' we do it. Computer software literally works in exactly the same way, except that this 'cross-referencing database of causality' is very narrow and broken down into small fragments that are hard-coded into the software's logic.

However, one could easily write software to access an actual searchable 'cross-referencing database of causality'. The same software could be designed to access this database and store a new entry every time the software encounters a cause-and-effect scenario. This would make up one of the major components of an AI because the software would literally expand and become 'smarter' at predicting outcomes each and every time that it observed causality.
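
As a toy illustration (all names hypothetical), such a database can be sketched in a few lines of Python; every observed cause-and-effect pair makes the predictor a little better:

```python
# Toy 'cross-referencing database of causality' (names hypothetical): each
# observed cause-and-effect pair makes the predictor a little smarter.
from collections import Counter, defaultdict

causality = defaultdict(Counter)      # cause -> Counter of observed effects

def observe(cause, effect):
    causality[cause][effect] += 1     # a new entry for every observation

def predict(cause):
    effects = causality[cause]
    return effects.most_common(1)[0][0] if effects else None

observe("play with fire", "get burned")
observe("play with fire", "get burned")
observe("wash hands", "clean hands")
print(predict("play with fire"))      # -> get burned
```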

Quote:
A person tries to win a chess game out of a desire to win and with an intention to win. A computer simply plays the game with no desire or intention. The computer is only doing what it was made to do, just like a car doesn't have a desire or intention to run so I can go to the pub to have a black and tan. The systems of today have no free will in the sense that we do.

I agree. The systems <i>of today</i> do not. Software however is the key. If the software says to enjoy a game of chess, then that's what it will do. It's all simply a matter of software. We currently program computers to not have desire or intention because it is the most efficient way of getting the computer to do what we want. That however does not make it the <i>only</i> way to write software.

Quote:
Anyone can simply follow instructions, but it takes intelligence to come up with an answer to any problem thrown at you. Here is where you could say, "But hey, computers can write code," and I would agree - but computers can't quite do it alone. The computer can't be presented the information in just any old way. The computer still must be hand-fed the data in a way that it can understand, plug it into its algorithm (which someone else wrote), and spit out some code. This is simply human intelligence again - the computer didn't have to think up anything; it was simply following instructions once again.

But again, this is all assuming that an AI's software is written purely by us. Whereas a true AI would have the ability to modify its own programming and write completely new software for itself. No, computers at the moment don't (not can't, just don't) do it all on their own. However, if someone wrote software to do so, then a computer could in fact do so. It isn't impossible; it just hasn't been done.

And that's not even entirely true. There are some pieces to this software that are already written. It is just a matter of time, not a matter of possibility.

Quote:
This is analogous to a computer in that even if a computer of today appears to be acting in an intelligent way it is just like the man in the Chinese room who is simply following his manual.

Which is because the software that the computer is running was written with that intention. Again, just because this is the most efficient way to get a computer to do what we want it to do does not mean that it is the <i>only</i> way to do so.

If the computer were indeed in such a scenario, was programmed with cryptography subroutines, was running a 'cross-referencing database of causality', and one of its primary threads was dedicated to making new entries into this database, then the computer would use cryptography to 'crack' the language and in doing so teach itself Chinese. From there on it would be capable of providing its own unique answers. Again, it's simply a matter of software.

Quote:
It was thought for a while that the connections were more digital - connection on or connection off - but more recent studies suggest that there is a rate of fire that is more analog in nature.

Really it's simply about reducing resistance and minimizing signal noise along pathways. We've known that electricity travels better when there is less resistance and interconnects are laid out to produce less signal noise for a long time. Yet for some odd reason modern doctors have been slow to think of our own nervous system as simply paths of conductivity. I find it rather funny really since 'alternative medicine', the very 'quackery' that these modern doctors scoff at so often, had things such as this figured out for centuries and even millennia before our modern doctors have.

Quote:
The human brain is still a mystery to a large extent, so I can't really say too much about that.

It is only a mystery to modern doctors. There are plenty of homeopathic healers who have much more of a clue.

Quote:
I would, however, not really call our brain a "processor" in the sense of a computer processor, as that does our brains little justice. The ability of the human brain to fix itself when damaged and to "rewire" itself is remarkable.

But these are just properties of organic systems. Cells are designed to repair. Take away the organic nature and you simply have a chemically-powered electrical computer with a huge load of write-once memory instead of an electrically-powered computer with small amounts of electro-mechanical rewritable memory. Our brains and computers are actually very similar in nature, and I think that this is a testament to the amazing concept of the computer, not an insult to our brains. The computer is actually quite an amazing device with decades of work done by the world's top engineers. It's pretty darn impressive. :) 

Quote:
Another thing is that we definitely do not understand how the human brain works, and I would think that if our brain worked like a computer processor we could figure it out a little better (since we thought up computer processors).

I think that if scientists were a little more open-minded they'd know a considerable amount more about the brain than they do now. But they have to do things the hard way, so it'll take them time. Granted, their research on how the brain works would go a lot faster if the brain didn't keep dying every time you poke around in it and pull it apart to really get at the inside.

And <i>that</i> is what has made it so hard for them to figure out. The brain really doesn't hold up so well once you start trying to disassemble it. You can't just put it together again or watch just one part of it work by itself because it keeps dying. It's really quite a pain in the arse to work with and not cause significant damage to. ;)  Hence research on the brain goes slowly.

Quote:
So when I said that our brains had no processor or memory like a computer system, I was referring more to the details of HOW it works, not the big picture, because I feel it is the "how" of it all that makes one of the biggest differences here.

But 'how' the logic functions has nothing to do with the underlying hardware. The 'how' is just 'software'. If humans didn't fundamentally have a layer of software involved, then neither hypnosis nor psychology could ever have any effect whatsoever on a person's thought and behavioral patterns. The major difference is not the hardware; it's the software.

Quote:
I would bring up again that in a problem-solving process there is much more than following an algorithm. There are a few different aspects of problem solving that a computer does not do:
1. figuring out a different way to look at a problem that no one else thought of.
2. figuring out what the important parts of a problem are and presenting or changing them into something understandable and usable.
3. knowing what to do with these parts once you have them.
4. applying all of this together.

But I have seen computers do these very things. How? Because they were programmed to. Computers <i>can</i> solve problems, and sometimes far more efficiently than the average human, so long as they are programmed to.

Ideally right now if we wanted a computer to do so, we would have to program it to do so. That does not make it impossible. That just means that we have to take the time and effort to do it.

In the future, however, with software that can write and refine itself, humans will no longer even be required for this step.

Quote:
Would a computer have come up with algebra or calculus? Would a computer be able to look at nature and figure out that something like evolution might exist? I can't see a computer of today (even if it had the means to identify every kind of input that we can) being able to accomplish this. While I can see computers as being good imitators of intelligence, the act isn't quite good enough yet for me to call it intelligence.

But again, this is simply a matter of software. The computer itself <i>could</i> do these things <i>if</i> it were programmed to. That is the barrier between an inert PC and a true AI. It is also the reason why most AIs can never be 'true' AI: they're simply designed to fool the average human with an imitation of intelligence. Once people start writing AIs on the basis of a self-learning, code-refining system with threads of 'prime directives' and searchable databases for both data storage and causality charting, then AI will begin to emerge. It's just a matter of time, and how long it takes is really more a matter of how long it takes for a paradigm shift in the minds of the people trying to write 'AI'.

Quote:
While we have instructions, we also have something more, which is hard to describe and pin down. But to go there would enter into a whole new discussion.

It's not that hard to pin down. If you take things to their extremes you will find that there are generally just two different points of view on what that 'something more' could be. (Three if you count 'nothing' as one of the points of view.)

1) That something more is a pattern of energy. Call it a soul, a spirit, whatever; it is ultimately just a component of pure energy which exists without matter and interacts with the physical components of our nervous system through the electrical impulses that our brains and nerves function on. What happens to this pattern of energy when it is no longer tied down to a component of matter (such as when we die), and how this pattern of energy got there in the first place, is a rather debatable subject amongst the believers in this possibility, but the fundamental aspect of what it is always remains the same.

2) That something more is a culmination of knowledge and experience. Whether learned from the time we were born or even partially genetic, it is simply the sum of what we have learned and borne witness to. Some even believe that some of this knowledge is passed on from the divine, but those folks are becoming less common as most of them switch over to group one eventually.

Ultimately I personally view it as a mixture of both, but I've never heard nor read anyone express that "something more" as anything that won't fit into one of these two categories. :) 

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
July 1, 2003 9:02:40 PM

See it any way you like, but I'm far from convinced that it is possible on current hardware. I don't have the energy to continue this as it won't lead to any sort of agreement anyway. I already spent a semester of my life debating these issues, so I am fairly convinced and firm in my stance. Good day.

"Don't question it!!!" - Err
July 2, 2003 2:17:34 AM

One approach is to force your A.I. program to do the same things that brought about the human mind.

Start with a couple of basic "virus" programs that replicate themselves, but with a twist: in each replication, a percentage of the offspring carry a random deviation (including the addition of new code, similar to genetic mutation) in some section of the program. The ones that don't work are Darwin's losers, and the ones that work reproduce. You could even make male and female programs whose offspring are a blend of the two, with a 50% chance of getting either "parent's" section of code for each part of the program. Again, survivors reproduce.
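A toy version of that scheme is easy to write down. The Python sketch below evolves lists of numbers standing in for sections of code rather than real programs, which is a huge simplification, but the mutation, the 50/50 blending of "parents", and the survivors-reproduce step are exactly the ones described above:

import random

TARGET = [7, 1, 3, 8, 2, 9, 4, 6]   # stands in for "a program that works"
MUTATION_RATE = 0.1                  # chance each section deviates in an offspring

def fitness(genome):
    # Selection pressure: the closer to the target, the likelier to survive.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def crossover(mum, dad):
    # Each section has a 50% chance of coming from either parent.
    return [random.choice(pair) for pair in zip(mum, dad)]

def mutate(genome):
    # The "twist": with some probability, a section randomly deviates.
    return [g + random.randint(-2, 2) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [[random.randint(0, 9) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    # The ones that work reproduce; the rest are Darwin's losers.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    population = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                  for _ in range(50)]

print(max(population, key=fitness))

After a couple of hundred generations the population clusters around the target without anyone ever having written the "solution" in.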

Our conscious mind and emotions come from about 3.5x10^9 years of replicators (genes) fighting for survival. Instincts develop as programs that solve problems successfully pass their code on to their offspring (notice, NOT the solution to the problem itself, which would be like having your ancestors' memories; frankly, I'll believe that voodoo when I see it. They just pass on the problem-solving ability). As the program becomes more and more complex, it eventually reaches a point where it must contain an internal model of its environment to test future actions, and in order to do that it has to have an accurate internal model of <i>itself</i>. Self-awareness. Consciousness. It only took DNA a few billion (thousand million) years.

Credit for some of the stuff above has to go to the author and scientist Richard Dawkins, who is well known for his work in genetics and evolution theory.

<font color=blue>Build a foolproof system and they'll build a better fool.</font color=blue>
July 2, 2003 3:17:10 AM

Quote:
It's not that hard to pin down. If you take things to their extremes you will find that there are generally just two different points of view on what that 'something more' could be. (Three if you count 'nothing' as one of the points of view.)

1) That something more is a pattern of energy. Call it a soul, a spirit, whatever; it is ultimately just a component of pure energy which exists without matter and interacts with the physical components of our nervous system through the electrical impulses that our brains and nerves function on. What happens to this pattern of energy when it is no longer tied down to a component of matter (such as when we die), and how this pattern of energy got there in the first place, is a rather debatable subject amongst the believers in this possibility, but the fundamental aspect of what it is always remains the same.

2) That something more is a culmination of knowledge and experience. Whether learned from the time we were born or even partially genetic, it is simply the sum of what we have learned and borne witness to. Some even believe that some of this knowledge is passed on from the divine, but those folks are becoming less common as most of them switch over to group one eventually.

Ultimately I personally view it as a mixture of both, but I've never heard nor read anyone express that "something more" as anything that won't fit into one of these two categories. :) 

I still think that the spirit is what the brain does.

Einstein says matter and energy are two states of the same stuff, so you can't have energy coming from nowhere. I believe Einstein before religion and philosophy.

I think that "something more" is a result of our amazingly advanced brains' automatically mapping out the rest of our lives to maximize survival of our genes on a subconscious level (everybody dies, so it's more important to pass on the code than for the body to survive, hence we have altruistic behaviour towards family members, with whom we share the most genes). I think this subconscious "auto-mapping" that our brains do for us is what people talk about when they say that they are acting with "God's will" or going on a "gut" feeling. There are many future paths and our brains explore all of them as far as they can whether we want them to or not.

This is what science (and I) have to say about it, and interestingly enough it is also a mixture of the two groups.

<font color=blue>Build a foolproof system and they'll build a better fool.</font color=blue>
July 2, 2003 3:56:06 AM

It always makes me laugh: we talk about making machines that can think like a human mind, when we already have billions of them and they're pretty easy to make.

"I'm a man armed with a fork in a land of soup."
July 2, 2003 6:26:10 AM

How can a computer enjoy chess? Just because the software says "enjoy this" to the computer, what makes it feel the enjoyment? It's not possible that way.

Besides, AI can exist without emotions.

-------

<A HREF="http://www.quake3world.com/ubb/Forum1/HTML/001355.html" target="_new">*I hate thug gangstas*</A>
July 2, 2003 2:22:13 PM

Quote:

How can a computer enjoy chess? Just because the software says "enjoy this" to the computer, what makes it feel the enjoyment? It's not possible that way.

Besides, AI can exist without emotions.

I agree that AI can exist without emotions, but if true AI can be produced then surely emotions aren't hard to emulate. What is an emotion but a chemical reaction to certain stimuli or memories? Emotions compel or encourage us to do things that we would not do otherwise. Emotions are also a series of physical reactions displayed by the individual experiencing the emotion. So why can't certain stimuli, or the retrieval of stored "memories", invoke certain subroutines that execute a series of facial reactions and affect behaviour in a certain manner? "Experience" can then allow the AI to regulate these responses, overriding the behavioural changes and physical reactions whenever the logic centre deems it appropriate.
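That paragraph translates almost directly into code. A minimal sketch in Python, with all the emotion names, reactions, and the override rule made up purely for illustration:

# Hypothetical emotion-as-subroutine model, following the paragraph above.
EMOTIONS = {
    # stimulus -> (facial reaction, behaviour modifiers)
    "threat": ("frown", {"caution": +0.5}),
    "reward": ("smile", {"persistence": +0.3}),
    "loss":   ("cry",   {"activity": -0.4}),
}

def react(stimulus, behaviour_state, logic_override=False):
    """Invoke the emotion subroutine for a stimulus, unless the logic centre overrides."""
    if stimulus not in EMOTIONS or logic_override:
        return None, behaviour_state
    reaction, modifiers = EMOTIONS[stimulus]
    for trait, delta in modifiers.items():
        behaviour_state[trait] = behaviour_state.get(trait, 0.0) + delta
    return reaction, behaviour_state

print(react("threat", {}))   # -> ('frown', {'caution': 0.5})

The override flag is the "experience" part: the logic centre can suppress both the display and the behavioural shift when it deems it appropriate.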

Intelligence is not merely the wealth of knowledge but the sum of perception, wisdom, and knowledge.
July 2, 2003 3:55:04 PM

Quote:
See it any way you like but I'm far from convinced that it is possible on current hardware. I don't have the energy to continue this as it won't lead to any sort of aggreement anyway. I already spent a semester of my life debating these issues so I am fairly convinced and firm in my stance. Good day.

Here's a little constructive criticism, Tommunist:
1) Closed minds only lead to dead ends.
2) Debating is not analogous to studying.
3) For someone as sure of their own knowledge as you are, a single semester is a very short time to have spent debating or studying anything so complex.
4) The purpose of a debate is not to reach an agreement but to provide differing points of view so that a better understanding of the whole can be reached.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
July 2, 2003 3:58:00 PM

confoundicator, that's a nice system if you don't mind leaving these things more or less up to luck. It'd probably work in the end, but I'm sure that we could come up with faster ways of achieving the same ends. :) 

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>