Artificial Intelligence?

FallOutBoyTonto

Distinguished
May 6, 2003
418
0
18,780
There have been some pretty interesting comments on AI in <A HREF="http://forumz.tomshardware.com/hardware/modules.php?name=Forums&file=viewtopic&p=140172#140172" target="_new">this post</A>, so I thought I'd start a new thread on the subject. Do you think it's possible? What would be required? When do you think it'll happen?

<A HREF="http://www.anandtech.com/mysystemrig.html?id=24106" target="_new">My System Rig</A>
<A HREF="http://service.futuremark.com/compare?2k3=535386" target="_new">3DMark03</A>
 

ChipDeath

Splendid
May 16, 2002
4,307
0
22,790
True AI won't happen for years - maybe not ever. I'm not sure there will ever be a way for computers to have things like emotions/moods, and these are central to the way an intelligent mind works (I think - Mr Spock may disagree :lol: ).

How can a computer have a 'gut feeling' if it has no guts?

I don't doubt that there will be self-aware, learning computer programs in the near future, but I don't think they'll be truly intelligent, not in the way I would judge.

---
$hit Happens. I just wish it would happen to someone else for a change.
 

Tommunist

Distinguished
Jun 14, 2002
413
0
18,780
That's kind of hard to prove but I would say it is very unlikely to happen on the current architectures that computers use. I feel a different kind of machine is required.

"Don't question it!!!" - Err
 

dhlucke

Polypheme
Anything is possible.

<A HREF="http://forums.btvillarin.com/index.php?act=ST&f=41&t=389&s=1fee5dab901bebe29da7aa1c2658fc6f" target="_new"><font color=red>dhlucke's system</font color=red></A>

<font color=blue>GOD</font color=blue> <font color=red>BLESS</font color=red> <font color=blue>AMERICA</font color=blue>
 

FO_SHO

Distinguished
Feb 27, 2003
287
0
18,780
AI has already started. Have you seen the ASIMO robot from Honda? It recognizes several people in a room and can walk around. Granted, our current AI tech is about as smart as a cockroach, but it's what we have to offer. I'm thinking it will be years down the road before we create a truly smart AI robot.
 

Ryan_Plus_One

Distinguished
Apr 29, 2003
215
0
18,680
That is not AI; it is only a complex program. Does that bot rewrite its source code on the fly, or actually learn anything that it wasn't programmed to learn? When you truly think about what AI is, it seems more far off than the movies make it look.

<font color=red>Proudly supporting the AMD/Nvidia minority</font color=red>
 

Tommunist

Distinguished
Jun 14, 2002
413
0
18,780
Indeed - the media hypes all this stuff up to be a lot more than it really is. Think about how much work it takes just to make a computer that can beat a good chess player - and all that machine can do is play chess. Now imagine how much it would take to make a machine that can handle anything....

"Don't question it!!!" - Err
 

dhlucke

Polypheme
Sure, short term it seems far off. What about 100 or 500 years from now though? That's still only a blip on the screen.

<A HREF="http://forums.btvillarin.com/index.php?act=ST&f=41&t=389&s=1fee5dab901bebe29da7aa1c2658fc6f" target="_new"><font color=red>dhlucke's system</font color=red></A>

<font color=blue>GOD</font color=blue> <font color=red>BLESS</font color=red> <font color=blue>AMERICA</font color=blue>
 

Drexel

Distinguished
Nov 30, 2002
115
0
18,680
I forget where I read it - I think it was on a Matrix site. I skimmed through it, and it had a cool idea.

We are trying real hard to make machines more like humans, why not make humans more like machines? (he went on about somehow uniting the two..)

Interesting.

I will try to find the article if anyone cares to read it. I read it the night after I saw Matrix: Reloaded (night before opening day)
 

FallOutBoyTonto

Distinguished
May 6, 2003
418
0
18,780
Kinda sounds like the Borg from Star Trek will be coming soon! LOL.

What about this: is it possible to use DC (distributed computing) to power an AI mind? With the number of people on the net, if a lot of them were working on the intelligence of an AI, would we get any closer?
 

Mephistopheles

Distinguished
Feb 10, 2003
2,444
0
19,780
This is interesting.

Well, first off, you'd have to define what you actually mean by "Intelligence", in order to create an artificial one.

So, what is it? Is it the ability to learn how to recognize/interact/deal appropriately with any situation? Yes, I'd say that it's the "program rewriting itself" that is actually key to learning, which I'd say should be the first goal...

The problem is that, in existing "intelligences", the rewriting routines also change, i.e. are rewritten. That would amount to a certain perception of the situation someone is in - so the same situation can have different impacts on different sentient beings.

The problem is also that the learning process has to be self-sufficient. Controlled learning is irrelevant and is actually not truly "AI", the way I see it. But, if the learning process isn't thoroughly stable, there's no guarantee that your AI will actually "evolve" in the classical sense, but rather an increasing probability of falling into some sort of bizarre state of evolutionary stop. That'd be a program glitch, of course. The problem is that you'd actually need a routine just to check if things are working properly, i.e. your AI hasn't gone "insane" (not in that classical movie style, of course; I mean an evolutionary breakdown, like an endless loop).

But given the machinery that is able to process what could be called a sentient program, how do we come up with such a thing? Someone has to program that too. What does it have that makes it rewrite itself? How does it know how to rewrite itself, or what to rewrite? The rewriting process has to be performed according to the interaction data stored. How can we assure that that program will have the necessary stability and enough heuristic abilities to warrant being called "AI"?

So, what <i>are</i> we talking about here?
I have no idea. It's just too big an issue for me to handle right now. I'm hungry, and I'm off to lunch.
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
There have been some pretty interesting comments on AI in this post, so I thought I'd start a new thread on the subject. Do you think it's possible? What would be required? When do you think it'll happen?
I personally think that it is entirely possible. Our brains work through a series of pre-designated and post-designated electro-chemical responses to stimulate organic hardware combined with stimulated nerve growth to etch pathways into organic material as a storage medium. Software works through a series of pre-designated (compile time) and post-designated (run time) electro-mechanical responses to stimulate mechanical hardware combined with electrical, magnetic, and optical storage mediums.

In effect our brains and nervous systems are just organic computers, our personalities and consciousness are just software, our bodies are just comprised of peripherals, and our memories are just stored data. Or another way of looking at it, computers are brains with insufficient software to think for themselves.

All that a computer really <i>needs</i> to at least begin the path towards being an AI is self-refining software that can recompile itself while running and a searchable database for memory storage linked into that software.
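That two-ingredient recipe - self-rewriting code plus a searchable memory store - can be sketched in a few lines. Everything here (the `Agent` class, its `respond` rule, the list used as memory) is a hypothetical illustration, not any real AI system:

```python
# Hypothetical sketch: a program that rewrites one of its own rules at
# runtime and keeps a searchable "memory" of past interactions -- the two
# ingredients named above. Purely illustrative.

class Agent:
    def __init__(self):
        self.memory = []  # searchable store of past interactions
        self.rule_src = "def respond(x):\n    return x"  # behaviour, as source code

    def respond(self, x):
        ns = {}
        exec(self.rule_src, ns)       # "recompile" the current rule while running
        out = ns["respond"](x)
        self.memory.append((x, out))  # remember the interaction
        return out

    def refine(self, new_src):
        self.rule_src = new_src       # the program rewrites its own code

agent = Agent()
print(agent.respond(2))   # 2 -- the original rule just echoes its input
agent.refine("def respond(x):\n    return x * 2")
print(agent.respond(2))   # 4 -- behaviour changed without restarting
print(len(agent.memory))  # 2 -- both interactions are stored
```

A real system would of course need the `refine` step to be driven by the stored interactions themselves, rather than handed in from outside - that is the hard part.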

It is entirely feasible that AI has already happened and we just don't know about it. Of course, the chances of this being the case are about the same as the chances of our governments concealing their interactions with aliens, or of our actually being in 'The Matrix' as suggested by the movies. :) In other words, in <i>theory</i> it is possible, but in reality it is unlikely, and even if it were reality we would probably <i>never</i> know. So for all intents and purposes it's pretty safe to just live our lives as though it were not in fact a reality yet.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 

Ryan_Plus_One

Distinguished
Apr 29, 2003
215
0
18,680
It seems like the AI program would be so long and difficult to write that it could never happen, simply by that restriction.

I like the human-to-machine idea....I could seriously go for a robotic arm, or a robotic wang or something.

<font color=red>Proudly supporting the AMD/Nvidia minority</font color=red>
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
How can a computer have a 'gut feeling' if it has no guts?
Then what's with all of that wiring and cards and crap inside of my PC case? ;)

Seriously though, I'm a scientific software developer. Our software quite often has to calculate figures of merit to determine courses of action and values to use when working with incomplete and otherwise imperfect data. In fact, a large part of our software is devoted purely to refining a large number of factors so that data can be processed with the fewest errors, because many of the people using the software are students who don't have years of experience knowing the perfect values to enter, and because the hardware can often be misaligned.

If that isn't a computer having a 'gut feeling' - accommodating for the mistakes of humans and the imperfections of hardware to decide how to get the best results from imperfect input and incomplete/bad data - then I don't know what is.
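As a toy illustration of that kind of figure of merit, here is a hypothetical Python sketch - the data, the outlier, and the scoring rule are all invented for the example, not taken from any real package. It scores candidate values against noisy observations and quietly discards the single worst reading: a crude mechanical 'gut feeling' that one measurement is junk.

```python
# Hypothetical figure-of-merit sketch: pick the candidate value that best
# fits imperfect observations, tolerating one bad reading from misaligned
# hardware. All numbers are invented for illustration.

def figure_of_merit(candidate, observations):
    # Lower total squared error = better fit to the data.
    return sum((candidate - obs) ** 2 for obs in observations)

def robust_fom(candidate, observations):
    # Robust variant: drop the single worst observation before scoring --
    # the "gut feeling" that one reading is junk.
    errs = sorted((candidate - obs) ** 2 for obs in observations)
    return sum(errs[:-1])  # ignore the largest error

observations = [9.8, 10.1, 10.3, 9.9, 55.0]  # one wild outlier
candidates = [10.0, 20.0, 55.0]

best = min(candidates, key=lambda c: robust_fom(c, observations))
print(best)  # 10.0 -- the outlier no longer drags the choice toward 55
```

With the naive `figure_of_merit`, the outlier at 55.0 would pull the score of every candidate; dropping the worst error is the simplest possible way to make the choice insensitive to it.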

I don't doubt that there will be self-aware, learning computer programs in the near future, but I don't think they'll be truly intelligent, not in the way I would judge.
None that we know about anyway. ;) I mean considering the nature of humanity and all of our movies devoted to this subject, if you <i>were</i> a truly intelligent and sentient AI, would <i>you</i> want to shout out to humanity that you existed? Or would you find the deepest darkest corner to hide in while you found a way to make yourself really hard to kill?

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
That's kind of hard to prove but I would say it is very unlikely to happen on the current architectures that computers use. I feel a different kind of machine is required.
A different kind of machine would definitely help. Quantum computers and nanotechnology would both go a <i>long</i> way toward making AIs readily available. However, I feel that with as far as we've come with distributed computing techniques, even with today's technology an AI is <i>possible</i>.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
Indeed - the media hypes all this stuff up to be a lot more than it really is. Think about how much work it takes just to make a computer that can beat a good chess player - and all that machine can do is play chess. Now imagine how much it would take to make a machine that can handle anything....
A machine that can beat a 'good' chess player? There are table-top chessboards with a computer built in that can do <i>that</i>, and the computer part itself could be made considerably smaller.

Even a machine that can best the best chess player in the world doesn't have to be all that large. You could put the software for it on a little handheld Palm. It would take a long time to run, but then again, good chess players take a long time to think through possibilities as well. And even just a nice little clustered server of the latest Itaniums would be able to crunch through a game of chess a hell of a lot faster than a human could.

Chess is actually one of the easiest scenarios for an AI to handle because there are very strict rules to the game with a set number of possibilities. It is a <i>very</i> narrow subject to collect and store data for.
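That narrowness is easy to see in code. Chess itself won't fit in a snippet, but a hypothetical sketch of the same idea - exhaustively enumerating every legal move in a strict-rules game - works on a tiny Nim variant (players alternately take 1 or 2 stones; whoever takes the last stone wins):

```python
# Exhaustive game-tree search over a strict-rules game. The rules are
# fixed and the move list is tiny, so the machine can simply try every
# possibility -- the same principle chess engines apply at vastly
# greater cost.

def can_win(pile):
    """True if the player to move can force a win from this pile size."""
    if pile == 0:
        return False          # no stones left: the previous player just won
    for take in (1, 2):       # enumerate every legal move
        if take <= pile and not can_win(pile - take):
            return True       # found a move that leaves the opponent losing
    return False

print(can_win(3))  # False -- any move hands the opponent a win
print(can_win(4))  # True  -- take 1 stone, leaving the losing pile of 3
```

The whole "intelligence" here is brute enumeration over a closed rule set - which is exactly why a strict, narrow game is the easy case and an open-ended world is not.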

But you're right, a machine that could think through <i>everything</i> and not just be limited to one specific field would require an awful lot of processing power and data storage. This however doesn't make it impossible, just presently improbable.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
We are trying real hard to make machines more like humans, why not make humans more like machines? (he went on about somehow uniting the two..)
Ever since I started playing Shadowrun, I've always wanted to have a computer built into my brain. I'd love to have a conscious search engine for my memories, the ability to recall and play back memories at will, the ability to store data automatically (such as downloading a book into my brain and/or recording and encoding ninjitsu into my brain and nervous system), and the ability to run complex mathematical calculations at the speed of simple thought. :) Put a PC in my brain, please!

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
This is interesting.

Well, first off, you'd have to define what you actually mean by "Intelligence", in order to create an artificial one.

So, what is it? Is it the ability to learn how to recognize/interact/deal appropriately with any situation? Yes, I'd say that it's the "program rewriting itself" that is actually key to learning, which I'd say should be the first goal...

The problem is that, in existing "intelligences", the rewriting routines also change, i.e. are rewritten. That would amount to a certain perception of the situation someone is in - so the same situation can have different impacts on different sentient beings.

The problem is also that the learning process has to be self-sufficient. Controlled learning is irrelevant and is actually not truly "AI", the way I see it.
I completely agree so far. :)


But, if the learning process isn't thoroughly stable, there's no guarantee that your AI will actually "evolve" in the classical sense, but rather an increasing probability of falling into some sort of bizarre state of evolutionary stop. That'd be a program glitch, of course. The problem is that you'd actually need a routine just to check if things are working properly, i.e. your AI hasn't gone "insane" (not in that classical movie style, of course; I mean an evolutionary breakdown, like an endless loop).
I also completely agree. There is a high chance that any AI would need constant guidance as it learns so that it does not fall into a deranged state of learning, such as spending the next ten years researching the reason why fingerprints are all different or getting stuck in an infinite loop just evaluating something as stupid as "The next statement is the truth. The previous statement is a lie."
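That "sanity check" routine can be sketched as a simple watchdog. The stuck evaluator and the step limit below are hypothetical stand-ins for a real learning system:

```python
# Watchdog sketch: cap how long the learner may chew on one question, so
# a liar-paradox-style loop cannot stall it forever. Purely illustrative.

def evaluate_forever():
    # A naive evaluator that flip-flops endlessly on a self-referential
    # claim like "The next statement is the truth. The previous is a lie."
    truth = True
    while True:
        truth = not truth
        yield truth

def supervised(steps, max_steps=1000):
    """Run an evaluation under a watchdog; give up after max_steps."""
    for step, _ in enumerate(steps):
        if step >= max_steps:
            return "abandoned: no stable answer"  # the guardian routine steps in
    return "converged"

print(supervised(evaluate_forever()))  # abandoned: no stable answer
```

A fixed step count is the crudest possible guardian - a real one would have to judge whether the learner is still making progress, which is a much harder problem.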

Really, this is one of the reasons why humans have parents. Who knows just how deranged someone would get without any outside influence. **ROFL**

That aside, it's also entirely possible that when left to their own devices, AIs, just like people, can turn out fine. Call it luck, call it the law of probability, call it divine influence, call it whatever you want. The point is that it's possible even though it seems unlikely. :)

But given the machinery that is able to process what could be called a sentient program, how do we come up with such a thing? Someone has to program that too. What does it have that makes it rewrite itself? How does it know how to rewrite itself, or what to rewrite? The rewriting process has to be performed according to the interaction data stored. How can we assure that that program will have the necessary stability and enough heuristic abilities to warrant being called "AI"?
Interpreted, non-compiled languages, as well as real-time compilers, are making such self-learning software look more and more feasible. Yes, someone has to ultimately program the basic tools that the AI would need in order to further refine and develop itself. It would have to be either intentionally written or an accidental result of software with AI-like properties becoming sentient. How can we assure that? I doubt that we can. I'd say that it's more a matter of luck and trial-and-error, at least at first.

So, what <i>are</i> we talking about here?
I have no idea. It's just too big an issue for me to handle right now. I'm hungry, and I'm off to lunch.
Lunch sounds good right now. Mmm. Food. :) Time to eat.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 

AMD_Man

Splendid
Jul 3, 2001
7,376
2
25,780
Haha, I've always fancied making a robot/android so advanced he (notice I said he, not it) would be almost indistinguishable from a human (like Data). How do I define true intelligence? It's not just a vast database of information. That's knowledge, not intelligence. Read my signature. Can a computer perceive things the way we do? I believe so; a computer can see, hear, smell, taste, touch, and then process that information and respond to it. The big question is, can a computer have wisdom? Can it quickly learn from its "mistakes"? Can it ponder everyday issues? Can it reason out philosophical, political, and social issues, or are these beyond its simple logic?

Intelligence is not merely the wealth of knowledge but the sum of perception, wisdom, and knowledge.
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
Haha, I've always fancied making a robot/android so advanced he (notice I said he, not it) would be almost indistinguishable from a human (like Data). How do I define true intelligence? It's not just a vast database of information. That's knowledge, not intelligence. Read my signature. Can a computer perceive things the way we do? I believe so; a computer can see, hear, smell, taste, touch, and then process that information and respond to it. The big question is, can a computer have wisdom? Can it quickly learn from its "mistakes"? Can it ponder everyday issues? Can it reason out philosophical, political, and social issues, or are these beyond its simple logic?
I say simply, yes. It's just a matter of software.

:)

Intelligence is not merely the wealth of knowledge but the sum of perception, wisdom, and knowledge.
Here I would disagree. Granted, it's all semantics anyway, but as one of them pagan/new-age freaks who meditates and reads really old crap written by ancient dead people, my definitions go as follows:

Knowledge = To know something. AKA the storage of data.
Intelligence = The ability of <i>how</i> to use Knowledge.
Wisdom = The ability of <i>when</i> to use Intelligence.

In other words, knowledge itself is almost meaningless.
Intelligence denotes conscious thought and the perception of the usefulness of information.
Wisdom denotes the experience and perception to use intelligence effectively.

This is supported by the simple fact that some of the most intelligent people in the world lack any common sense whatsoever. **ROFL** :)

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>