A. I. Robots.

Started by Raven, November 08, 2023, 04:53:10 PM


Raven

Well I was against having them in the workplace as I felt they would eventually take away our jobs, but here's another good reason for not letting them in. :wtf:
I think we are in far too much of a hurry with technology and should slow down.  :shocked:

BBC News - Man crushed to death by robot in South Korea
https://www.bbc.co.uk/news/world-asia-67354709

Michael Rolls

nasty - have to admit I don't really understand much about AI
Thank you for the days, the days you gave me
[email protected]

Alex

I don't know much about it either and certainly don't understand the panic over AI

klondike

That one looks more like automation than AI to me.

You are right about replacing people though. More and more jobs will eventually be done by machines. That process started a couple of hundred years or more back.

Raven

The more jobs they do, the more people unemployed, the fewer jobs available for people, and the more on the benefits roundabout.
The more the government will scream about all the money they need to pay out.
Simple answer: jobs are for people, not robots.  :angry:

dextrous63

What about those jobs involving designing, building and maintaining robots?  

Raven

Personally I would get shot of the bloody things. BUT, I know most won't think like that.
I don't like the thought of them, and I'll never trust them.

According to a recent study, half of all AI researchers believe there is at least a 10 per cent chance of AI causing human extinction, with many warning that robots could be capable of human-like goals such as attaining high political office, starting new religions or even playing God.

dextrous63

I was going to rubbish that thought Raven, but upon reflection, since we keep on seeing half witted greedy morons gaining positions of high office, then AI creations probably stand a good chance after all.

Raven

Quote from: dextrous63 on November 08, 2023, 11:11:03 PM
I was going to rubbish that thought Raven, but upon reflection, since we keep on seeing half witted greedy morons gaining positions of high office, then AI creations probably stand a good chance after all.

Not my thoughts Dex, just an extract from Google. :busted:

Alex

"  there is at least a 10 per cent chance of AI causing human extinction, with many warning that robots could be capable of human-like goals such as attaining high political office, starting new religions or even playing God."

That's the part I don't understand, it sounds crazy.   :rolleyes: 

dextrous63

I'd imagine it's down to how much autonomy and "free thought" one would give to AI, and whether a failsafe check is indelibly inserted into the programming.

Klondy will perhaps give us a layman's explanation of the Turing test😉

klondike

I'd say that, with two quite major conflicts under way, each involving a nuclear-armed country (one of which may find its back to the wall if external forces cause it to give up before defeating one of its adversaries, the other led by a psychotic megalomaniac facing eventual defeat or economic meltdown), we maybe already face a 10% chance of extinction without the need for real-life terminators.

November 09, 2023, 09:47:56 AM
Quote from: dextrous63 on November 09, 2023, 07:33:09 AM
I'd imagine it's down to how much autonomy and "free thought" one would give to AI, and whether a failsafe check is indelibly inserted into the programming.

Klondy will perhaps give us a layman's explanation of the Turing test😉
I imagine that something like Asimov's three laws of robotics would be built in. Plus maybe QD plugs  :grin:

peterv6153

In 1942 Isaac Asimov defined his "Laws of Robotics" as follows:
 
1.      A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.      A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3.      A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
 
Perhaps if all robots/AI machines were designed and built to follow these laws there would be no accidents.
 

Michael Rolls

there was also a zeroth Law, along the lines of:

A robot may not injure mankind or, through inaction, allow mankind to come to harm.

Can't remember in which book it appeared, but it came long after the first three.
Thank you for the days, the days you gave me
[email protected]

klondike

That one was invented by a couple of robots themselves in one or other of the books that were written much later to link much of his SF into a consistent whole.