The AI Problem, historical perspective


The AI Problem, historical perspective

bw-2
If we start back at the science-fiction beginning of the idea, we see that
Asimov really did not anticipate the implications of AI impersonating
human intelligence.  Some of his stories did have robots/androids that
impersonated humans, but they ignored the fundamental point: robots should
disclose themselves upon demand.  That should be a basic law.

This failure of vision, IMO, has led to many of the problems we now face:
How do we stop robocalls?  WTF do we do when the grid goes down and the
computer won't cooperate?  Why does the bank keep saying I have three last
names?  And so on.

It's off-topic, so I apologize.  I thought some of the non-AI readers might
find it interesting.

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics


Re: The AI Problem, historical perspective

John Hasler-3
I doubt that anyone working in the AI field has ever taken Asimov's
Three Laws seriously.
--
John Hasler
[hidden email]
Elmwood, WI USA


Re: The AI Problem, historical perspective

Nicholas Geovanis-2
In reply to this post by bw-2
I don't detect a failure of vision causing the problems you mention; I detect "AI" in the service of attaining cash inflow.
Second, I like what Feigenbaum said about AI some time ago. It is actually a deep remark: every time we think we have made an advance towards AI, it turns out that we have only written a good program.

On Sat, Aug 24, 2019, 7:39 PM bw <[hidden email]> wrote:
> If we start back at the science-fiction beginning of the idea, we see that
> Asimov really did not anticipate the implications of AI impersonating
> human intelligence.  Some of his stories did have robots/androids that
> impersonated humans, but they ignored the fundamental point: robots should
> disclose themselves upon demand.  That should be a basic law.
>
> This failure of vision, IMO, has led to many of the problems we now face:
> How do we stop robocalls?  WTF do we do when the grid goes down and the
> computer won't cooperate?  Why does the bank keep saying I have three last
> names?  And so on.
>
> It's off-topic, so I apologize.  I thought some of the non-AI readers might
> find it interesting.
>
> https://en.wikipedia.org/wiki/Three_Laws_of_Robotics


Re: The AI Problem, historical perspective

Gene Heskett-4
In reply to this post by John Hasler-3
On Saturday 24 August 2019 21:06:27 John Hasler wrote:

> I doubt that anyone working in the AI field has ever taken Asimov's
> Three Laws seriously.

And that scares the hell outta me, John.

Just like having an MBA degree. Anything you don't get caught doing is
A-OK.

So it's my job as chief operator at a TV station to see to it that the promos
can't be construed as payola. I've stopped and destroyed several
instances of that.

GSMs are often MBAs, and they'd like to think they can call the owner
and have me fired for insubordination. But their toast always lands
jelly side down. That's led to quite a bit of mirth at their expense.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: The AI Problem, historical perspective

John Hasler-3
In reply to this post by Nicholas Geovanis-2
There was a time when it was generally accepted that computers playing
chess at the grandmaster level would be proof of strong AI.  Every time
we think we've made an advance towards AI, it turns out that AI is whatever
it is that computers can't do yet.
--
John Hasler
[hidden email]
Elmwood, WI USA


Re: The AI Problem, historical perspective

John Hasler-3
In reply to this post by Gene Heskett-4
I wrote:
> I doubt that anyone working in the AI field has ever taken Asimov's
> Three Laws seriously.

Gene writes:
> And that scares the hell outta me, John.

That's not what I mean.  The Three Laws are statements of moral
principles. As such they make a whole raft of implicit assumptions that
you don't notice because you *are* a human being. They become vague and
contradictory when you try to reduce them to logic.  People designing
robots should think about the subject matter of the laws (but that's
just morality: nothing in particular to do with robots) but it's
impossible to implement the Three Laws in software.

They were great stories, but they really have no bearing on actual AI
research.
--
John Hasler
[hidden email]
Elmwood, WI USA


Re: The AI Problem, historical perspective

celejar
In reply to this post by John Hasler-3
On Sat, 24 Aug 2019 20:35:46 -0500
John Hasler <[hidden email]> wrote:

> There was a time when it was generally accepted that computers playing
> chess at the grandmaster level would be proof of strong AI.  Every time
> we think we've made an advance towards AI, it turns out that AI is whatever
> it is that computers can't do yet.

http://nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html
https://en.wikipedia.org/wiki/AI_effect

Celejar


Re: The AI Problem, historical perspective

Gene Heskett-4
In reply to this post by John Hasler-3
On Saturday 24 August 2019 22:05:33 John Hasler wrote:

> I wrote:
> > I doubt that anyone working in the AI field has ever taken Asimov's
> > Three Laws seriously.
>
> Gene writes:
> > And that scares the hell outta me, John.
>
> That's not what I mean.  The Three Laws are statements of moral
> principles. As such they make a whole raft of implicit assumptions
> that you don't notice because you *are* a human being. They become
> vague and contradictory when you try to reduce them to logic.  People
> designing robots should think about the subject matter of the laws
> (but that's just morality: nothing in particular to do with robots)
> but it's impossible to implement the Three Laws in software.
>
> They were great stories, but they really have no bearing on actual AI
> research.

But they should. A variation of the Hippocratic oath: first, do no harm.
And they've already done lots of harm, sucking in the almighty dollar.
It's far more important to the purveyors than any good that has accrued
from applying AI.

But in a way, because my time, and what little influence I may have, is
surely drawing to a close, it's apparent I won't be witness to the end
result. It will be whatever it will be, without me.

OTOH, John, it's been one hell of a ride: not always enjoyable, but as
interesting as can be. I've been places and done things that very few
can claim, and if I could replay it from 1934 again, I'd not change a
heck of a lot. :-)

Take care.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>


Re: The AI Problem, historical perspective

tomas@tuxteam.de
In reply to this post by John Hasler-3
On Sat, Aug 24, 2019 at 09:05:33PM -0500, John Hasler wrote:
> I wrote:
> > I doubt that anyone working in the AI field has ever taken Asimov's
> > Three Laws seriously.
>
> Gene writes:
> > And that scares the hell outta me, John.
>
> That's not what I mean.  The Three Laws are statements of moral
> principles [...]

That's a subtle trojan exploiting our everyday geek's deepest
vulnerability:

  "Morals are messy and human: ergo they don't exist. So Tom,
   implement that weaponized AI as I told you already"

This vulnerability (and this kind of exploit) is much older than
the term AI (which is already old, measured in bubbleconomy years).

Cheers
-- t


Re: The AI Problem, historical perspective

John Hasler-3
I wrote:
> That's not what I mean.  The Three Laws are statements of moral
> principles [...]

t writes:

> That's a subtle trojan exploiting our everyday geek's deepest
> vulnerability:

>  "Morals are messy and human: ergo they don't exist. So Tom,
>   implement that weaponized AI as I told you already"

No, it isn't.  It's a statement of fact.  Those are OK moral principles
for robot designers[1].  You can design a machine morally.  You cannot
design morality into a machine until you reduce morality to an
algorithm.  The Three Laws aren't that.  They aren't even close.


[1] I prefer "A robot should do its job and not hurt anyone."
--
John Hasler
[hidden email]
Elmwood, WI USA


Re: The AI Problem, historical perspective

Joe Rowan
On Sun, 25 Aug 2019 07:26:12 -0500
John Hasler <[hidden email]> wrote:


> [1] I prefer "A robot should do its job and not hurt anyone."

The elephant in the room being the definition of 'hurt'.

https://www.zerohedge.com/news/2019-08-21/youtube-banning-robot-fighting-videos-over-animal-cruelty

--
Joe


Re: The AI Problem, historical perspective

The Wanderer
On 2019-08-25 at 09:19, Joe wrote:

> On Sun, 25 Aug 2019 07:26:12 -0500 John Hasler <[hidden email]>
> wrote:

>> [1] I prefer "A robot should do its job and not hurt anyone."
>
> The elephant in the room being the definition of 'hurt'.
>
> https://www.zerohedge.com/news/2019-08-21/youtube-banning-robot-fighting-videos-over-animal-cruelty

Not to mention: I'm fairly sure the software which runs autonomous,
non-piloted drones would qualify as AI for the purposes of the Three
Laws, at least as much as most things we're doing at current technology
levels would, and some of those are intentionally designed *to* hurt
people. As long as it's the "right" people.

It seems clear to me that when Asimov formulated the Three Laws, he
either failed to account for the possibility of legitimate cases for
robots injuring or otherwise harming humans (war, law enforcement,
private security, ...), or - and I think this is the more likely
scenario - was specifically trying to disallow any of those things from
ever being considered legitimate to have a robot do, either out of
philosophical objections or out of concern for the consequences which
could arise (in a robot-uprising sense, if nothing else) if that door
were once opened even a crack.

I might find any arguments to the contrary to be interesting.

--
   The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man.         -- George Bernard Shaw



Re: The AI Problem, historical perspective

John Hasler-3
In reply to this post by Joe Rowan
I wrote:
> [1] I prefer "A robot should do its job and not hurt anyone."

Joe writes:
> The elephant in the room being the definition of 'hurt'.

No more so than "injure" or "harm".
--
John Hasler
[hidden email]
Elmwood, WI USA


Re: The AI Problem, historical perspective

Charles Curley
In reply to this post by The Wanderer
On Sun, 25 Aug 2019 09:37:32 -0400
The Wanderer <[hidden email]> wrote:

> It seems clear to me that when Asimov formulated the Three Laws, he
> either failed to account for the possibility of legitimate cases for
> robots injuring or otherwise harming humans (war, law enforcement,
> private security, ...), or - and I think this is the more likely
> scenario - was specifically trying to disallow any of those things
> from ever being considered legitimate to have a robot do, either out
> of philosophical objections or out of concern for the consequences
> which could arise (in a robot-uprising sense, if nothing else) if
> that door were once opened even a crack.

On the other tentacle, the Good Doctor was well aware of, and got a lot
of good stories out of, the problems associated with the Three Laws.

"By the Asimov who made you,
you're a better man than I, Hunk a Tin."

-- Randall Garrett

--
"When we talk of civilization, we are too apt to limit the meaning of
the word to its mere embellishments, such as arts and sciences; but
the true distinction between it and barbarism is, that the one
presents a state of society under the protection of just and
well-administered law, and the other is left to the chance government
of brute force."
- The Rev. James White, Eighteen Christian Centuries, 1889
Key fingerprint = 38DD CE9F 9725 42DD E29A  EB11 7514 6D37 A332 10CB
https://charlescurley.com


Re: The AI Problem, historical perspective

John Hasler-3
In reply to this post by The Wanderer
The Wanderer writes:
> I might find any arguments to the contrary to be interesting.

I think he was just writing a story.
--
John Hasler
[hidden email]
Elmwood, WI USA


Re: The AI Problem, historical perspective

deloptes-2
John Hasler wrote:

>> I might find any arguments to the contrary to be interesting.
>
> I think he was just writing a story.

It is not just a story; there is deep philosophy behind it. This is a
philosophy that is being forgotten nowadays.

I am afraid modern children will have a hard time understanding his point of
view.




Re: The AI Problem, historical perspective

Gene Heskett-4
On Sunday 25 August 2019 10:50:36 deloptes wrote:

> John Hasler wrote:
> >> I might find any arguments to the contrary to be interesting.
> >
> > I think he was just writing a story.
>
> It is not just a story; there is deep philosophy behind it. This is a
> philosophy that is being forgotten nowadays.
>
> I am afraid modern children will have a hard time understanding his
> point of view.

I'd have to violently agree. And I'd place the blame squarely on the
libtards who have removed right-versus-wrong teachings from the
classrooms.

The current state of robotics leaves only a probably-broken switch's
difference between the robot sent into a burning building to find and
rescue survivors, and the armed version on the same mobility frame sent
into a terrorist-occupied building to disable any survivors.

And we seem to be making far more technical progress on the search-and-kill
versions.  That doesn't seem right.  But the place to start fixing
that is the classroom, years before that student is hired to write that
code.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>