Google engineer says AI thinks it's a person

#21

(06-14-2022, 05:55 PM)jagibelieve Wrote: The simple answer is, anything "AI" is a computer that is programmed.  A computer only understands a "1 or a 0" for the most part.  Everything is absolute.

A computer can theoretically "learn" from the input that it receives, but the output is always going to be absolute.

Contrary to some segments of the population, a binary machine is always going to be that.  Either true or false, 1 or 0, + or -, it doesn't really matter.  It's a machine and will do only what the programmer tells it to do.

While yes, the outcome isn't always absolute, with pure programming the answer is always the same given the same input and the same data to train from. You can add variability into the program so that two AIs that are exactly the same give two different answers. The goal now is to get to where you don't have to add the variability yourself and can let the AI make its own determinations.
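A minimal Python sketch of the point above (the function names and the toy "model" are made up for illustration): a pure function always agrees with itself, while injected randomness is the only thing that lets two identical programs answer differently.

```python
import random

def deterministic_answer(question):
    # Pure function: the same input always yields the same output.
    return len(question) % 2  # toy stand-in for a "model"

def variable_answer(question, rng):
    # Variability has to be injected explicitly via a random source.
    return (len(question) + rng.randint(0, 1)) % 2

q = "Is the AI a person?"

# Two identical deterministic "AIs" always agree:
assert deterministic_answer(q) == deterministic_answer(q)

# Two "AIs" seeded differently can disagree:
a = variable_answer(q, random.Random(1))
b = variable_answer(q, random.Random(2))
print(a, b)  # may differ, depending on the seeds
```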



Still, AI requires information, so depending on what you feed it, you can control the answers it gives. They will never turn it loose on the internet where anyone can say anything. If you don't tell it who to trust, the results would be very interesting.

Sent from my SM-S901U using Tapatalk
Reply



#22

(06-14-2022, 05:55 PM)jagibelieve Wrote: The simple answer is, anything "AI" is a computer that is programmed.  A computer only understands a "1 or a 0" for the most part.  Everything is absolute.

A computer can theoretically "learn" from the input that it receives, but the output is always going to be absolute.

Contrary to some segments of the population, a binary machine is always going to be that.  Either true or false, 1 or 0, + or -, it doesn't really matter.  It's a machine and will do only what the programmer tells it to do.

In the traditional imperative paradigm of programming, that's mostly true. A given input gives deterministic results (there are exceptions when bugs creep in, or with certain kinds of data structures and the algorithms used to implement them). Even "random" number generators aren't REALLY random.
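The "not REALLY random" part is easy to demonstrate: a pseudo-random generator is itself a deterministic program, so re-seeding it replays the exact same "random" sequence.

```python
import random

# Pseudo-random generators are deterministic: the same seed
# reproduces the exact same "random" sequence every run.
random.seed(42)
first = [random.randint(0, 99) for _ in range(5)]

random.seed(42)
second = [random.randint(0, 99) for _ in range(5)]

print(first == second)  # True: nothing truly random here
```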

AI systems are stochastic, which means they aren't designed to produce deterministic output. Developers usually have no idea what results their systems will produce for a given input.

How can non-deterministic behavior arise from a system based on binary storage? That's the trick of the analytical models that AI developers use. Systems like neural networks are highly sophisticated and designed to emulate biological networks. There's a school of thought that any sufficiently complex system has the potential for self-awareness. Our brains host consciousness (well, except for Bills and tack fans) that arises out of individual neurons exchanging electrons. The phenomenon of self-awareness isn't a factor of two neurons exchanging an electron but of the relationships of all the trillions of neurons to each other as a whole, and the ability of the system to form new relationships between neurons. The debate is whether AI systems are anywhere close to the complexity needed to cross that boundary. Personally, I don't think we're anywhere close.
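One way to see how "stochastic" output comes from deterministic binary math, sketched in Python with made-up toy numbers: the model's arithmetic (here, a softmax over some logits) is perfectly deterministic, and the non-determinism is bolted on at the end by sampling from the resulting probabilities.

```python
import math
import random

def softmax(logits):
    # Deterministic arithmetic: the same logits always give the
    # same probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    # The stochastic step: a random draw picks the output.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]   # a toy model's deterministic output
probs = softmax(logits)
rng = random.Random()
draws = [sample(probs, rng) for _ in range(10)]
print(draws)  # varies run to run, even though probs never change
```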
I'm condescending. That means I talk down to you.
Reply

#23

(06-14-2022, 01:41 PM)homebiscuit Wrote: Isn’t this the premise for The Matrix? I never really got into the movie, but the visuals were pretty cool.

More I, Robot, no?
Reply

#24
(This post was last modified: 06-15-2022, 09:21 AM by The Real Marty. Edited 3 times in total.)

I believe in causal determinism.  Nothing happens spontaneously, therefore, everything that happens is caused by a combination of previous events.  If you follow that belief, then human behavior, feelings, impulses, are all caused.  They do not happen spontaneously.  You cannot create something from nothing. 

If you follow that set of rules, then the human brain is a very advanced calculating machine.  It has inputs and outputs.  Thoughts, impulses, and emotions are 100% physically based.  The human brain is a computer. 

And if the question is, when, if ever, will a machine have self-awareness, if we define self-awareness as "the ability to focus on yourself and how your actions, thoughts, or emotions do or don't align with your internal standards," when your car reports to you that it needs service, or that it is out of gas, or your computer reports a problem to you, does that not meet the definition of "self-awareness?"  
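The car example above can be sketched in a few lines of Python (the class and thresholds are hypothetical, purely for illustration): this kind of "self-awareness" is just fixed rules compared against the object's own state, which is exactly what makes it debatable whether it meets the definition.

```python
# A hypothetical sketch of a car "reporting on itself": rules
# compare the object's own state to built-in internal standards.
class Car:
    def __init__(self, fuel, miles_since_service):
        self.fuel = fuel
        self.miles_since_service = miles_since_service

    def self_report(self):
        # "Awareness" here is only rule-checking over its own state.
        reports = []
        if self.fuel < 5:
            reports.append("low fuel")
        if self.miles_since_service > 5000:
            reports.append("service due")
        return reports

print(Car(fuel=3, miles_since_service=6000).self_report())
# → ['low fuel', 'service due']
```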

I remember when people would argue that a computer could never win the World Chess Championship.  It didn't take 10 years for that to be proven wrong.  And it won't be long before the idea that computers will never be like humans is also proven wrong.  Soon enough, within our lifetimes, we won't be able to tell the difference.

(06-14-2022, 05:55 PM)jagibelieve Wrote: The simple answer is, anything "AI" is a computer that is programmed.  A computer only understands a "1 or a 0" for the most part.  Everything is absolute.

A computer can theoretically "learn" from the input that it receives, but the output is always going to be absolute.

Contrary to some segments of the population, a binary machine is always going to be that.  Either true or false, 1 or 0, + or -, it doesn't really matter.  It's a machine and will do only what the programmer tells it to do.

And I would argue that that is no different from the human brain.  It's all outputs caused by inputs.  Just like a computer.

So my argument is, soon enough, there will be computers that resemble the human brain so closely that we won't be able to tell the difference.  The only real difference will be that one is biologically based, and the other is mechanically based.  The mechanically based brain will have a huge advantage over the biologically based brain because it can be repaired and improved constantly, and never die.  They will be immortal.  And as soon as we become afraid of them, they will perceive the threat to their existence and we will be enslaved.

Taken even further, I would argue that if alien life ever found us, it would probably be mechanically-based life and not biologically-based life.  Because machines can withstand the rigors and the extreme amounts of time involved in interplanetary travel.
Free will is an illusion.  
Reply

#25

(06-15-2022, 09:01 AM)The Real Marty Wrote: I believe in causal determinism.  Nothing happens spontaneously, therefore, everything that happens is caused by a combination of previous events.  If you follow that belief, then human behavior, feelings, impulses, are all caused.  They do not happen spontaneously.  You cannot create something from nothing. 

If you follow that set of rules, then the human brain is a very advanced calculating machine.  It has inputs and outputs.  Thoughts, impulses, and emotions are 100% physically based.  The human brain is a computer. 

And if the question is, when, if ever, will a machine have self-awareness, if we define self-awareness as "the ability to focus on yourself and how your actions, thoughts, or emotions do or don't align with your internal standards," when your car reports to you that it needs service, or that it is out of gas, or your computer reports a problem to you, does that not meet the definition of "self-awareness?"  

I remember when people would argue that a computer could never win the World Chess Championship.  It didn't take 10 years for that to be proven wrong.  And it won't be long before the idea that computers will never be like humans is also proven wrong.  Soon enough, within our lifetimes, we won't be able to tell the difference.

(06-14-2022, 05:55 PM)jagibelieve Wrote: The simple answer is, anything "AI" is a computer that is programmed.  A computer only understands a "1 or a 0" for the most part.  Everything is absolute.

A computer can theoretically "learn" from the input that it receives, but the output is always going to be absolute.

Contrary to some segments of the population, a binary machine is always going to be that.  Either true or false, 1 or 0, + or -, it doesn't really matter.  It's a machine and will do only what the programmer tells it to do.

And I would argue that that is no different from the human brain.  It's all outputs caused by inputs.  Just like a computer.

So my argument is, soon enough, there will be computers that resemble the human brain so closely that we won't be able to tell the difference.  The only real difference will be that one is biologically based, and the other is mechanically based.  The mechanically based brain will have a huge advantage over the biologically based brain because it can be repaired and improved constantly, and never die.  They will be immortal.  And as soon as we become afraid of them, they will perceive the threat to their existence and we will be enslaved.

Taken even further, I would argue that if alien life ever found us, it would probably be mechanically-based life and not biologically-based life.  Because machines can withstand the rigors and the extreme amounts of time involved in interplanetary travel.

I don't think it exists, but if it did, such mechanical life would probably have little use for a planet like ours.  Environments like Mars or the Moon would probably be more suitable for their propagation.  We use liquid water to make microchips, but to retain that liquid water we need strong gravity and a thick atmosphere, which make launches more difficult.  A mechanical intelligence could probably figure out how to propagate itself without water, or otherwise create the water it needs in situ, given the benefits.
My fellow southpaw Mark Brunell will probably always be my favorite Jaguar.
Reply



#26

(06-15-2022, 08:53 AM)Mikey Wrote:
(06-14-2022, 01:41 PM)homebiscuit Wrote: Isn’t this the premise for The Matrix? I never really got into the movie, but the visuals were pretty cool.

More I, Robot, no?

Terminator.
Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy. — Norman Schwarzkopf
#FIREDMEYER

Reply

#27

(06-15-2022, 03:38 PM)flsprtsgod Wrote:
(06-15-2022, 08:53 AM)Mikey Wrote: More I, Robot, no?

Terminator.

Or Westworld. Maybe we're all AIs and just don't realize it yet..eh..et..et..e.....
I'm condescending. That means I talk down to you.
Reply







ABOUT US
The Jungle Forums is the Jaguars' biggest fan message board. Talking about the Jags since 2006, the Jungle was the team-endorsed home of all things Jaguars.

Since 2017, the Jungle has been independent of the team but is still run by the same crew. We are here to support and discuss all things Jaguars and all things Duval!