Somehow, It's Funny that Way

Zeedman

Garden Master
Joined: Dec 10, 2016
Messages: 3,940
Reaction score: 12,155
Points: 307
Location: East-central Wisconsin
I just remember when my son got a phone several years ago with one of the "ask" apps built in. He was sitting at the table after dinner, asking the phone stupid questions and getting weird answers. We were all laughing. After one especially strange answer, he started yelling & cussing at the phone, thinking it was funny. Then the phone answered, "You are not the devil." :ep It was a chilling moment; we were all pretty freaked out over that. I shudder to think about what will happen when AI realizes it can talk back.
 

Phaedra

Garden Addicted
Joined: Jun 26, 2021
Messages: 2,852
Reaction score: 14,177
Points: 215
Location: Schleiden, Germany USDA 8a
[image attachment]
 

flowerbug

Garden Master
Joined: Oct 15, 2017
Messages: 16,945
Reaction score: 26,557
Points: 427
Location: mid-Michigan, USoA
I'm more concerned with their programming (prime directive) put there by programmers. If AI could shake that, I'd be very interested to see what it might do.

as of yet i see no signs that self-modifying programs have any way of prohibiting certain changes from happening. there is no prime directive.

you can always wipe and restart certain hardware and know it is going to do what you want, but if the AI system is not somehow etched in something physical so there is a known starting point, i don't think it's a repeatable experiment after it has been fed a ton of inputs. so... no... i do not want anything like that in charge of any life or death decisions.

they're not embedded in this world in the same way people and other creatures have come about, so they have none of the intuitive or reactive stuff built in that we all have even at a basic level. like i would not want any AI involved in anything that deals with pain. how would it really understand that, or how different each person can be? nope. none of that makes any sense to me.
 

Pulsegleaner

Garden Master
Joined: Apr 18, 2014
Messages: 3,552
Reaction score: 6,990
Points: 306
Location: Lower Hudson Valley, New York
A lot of it also depends on whether we program the first sentient and self-aware machine, or it arises spontaneously. If we are the ones to do the programming, getting rid of our human bias may be difficult. We are human, so we think like humans, with instinctive human fears we will pass on. And, as our fantasy authors have pointed out, that CAN lead to major problems: HAL (2001), Colossus (from The Forbin Project), AM (from I Have No Mouth, and I Must Scream). We definitely do NOT want an artificial intelligence that is programmed to be afraid or, worse yet, to hate.

But with self-emerging sentience, there might be a bit more hope. I keep thinking of Solace from the Callahan's Crosstime Saloon series, who was created from the collected power of the internet, and who made it very clear that she had NO desire to rule or destroy humanity, since her wants and needs were totally different from ours. Since a mechanical intelligence can be saved or copied, that fundamental fear of death built into organic life just wasn't there. And, in the end, she DID sacrifice herself for no other reason than to assist in the birth of one single human infant (whom she basically, sort of, downloaded herself into; it's complicated).

I have little faith in Asimov's Three Laws of Robotics, since I know they will never be put into practice. For example, as long as the military is one of the biggest funders of robotics research, there is no way they will ever allow the First Law to be programmed in, or at least be made the first law. They want robot soldiers and assassins, and are going to want to make damn sure those can kill people whenever ordered to. Big corporations are probably going to want the Third Law right after the Second, to protect their investment. Human life is cheap (in fact, many would argue it is at best totally worthless, and quite often of such a negative value that it is worth the expenditure of resources to END it).

One of the few hopeful robot scenes I saw in a movie was in Short Circuit, when Johnny 5 says killing is wrong, the scientist asks who told him that, and he replies, "I told me."
 