Discussion Main / InstantAIJustAddWater

PatBerry Since: Oct, 2012
Mar 10th 2014 at 11:30:00 AM •••

This Star Trek item seems inaccurate, to say the least:

  • Between holodeck malfunctions and almost every known humanlike hologram such as The Doctor expanding their horizons over time, for good or for ill, it seems sentience is what will happen to any hologram left on too long. Even more so than robots (compare Data or the original Trek's evil AI MOTWs). They also become very humanlike, for some reason. This happens throughout the Trek Verse. The questionable morality of using holograms as sexbots or tackling dummies in light of this is never discussed, though the treatment of individual holograms who have achieved sentience frequently is.

Not even close to true. I'm aware of only four holograms in all of Star Trek that were established to be sentient: Minuet, Moriarty, the Emergency Medical Hologram, and Vic Fontaine. Of these four, two (Minuet and Vic) were intentionally designed to be sentient. One of the others (Moriarty) was unintentionally designed to be sentient when Geordi asked the holodeck to create an opponent capable of defeating Data. (Geordi was asking for sentience, but didn't realize it at the time.)

That leaves exactly one hologram (the EMH) that spontaneously became sentient (due partly to operating for long periods, and partly from being required to exceed its original design limitations). The claim that this "is what will happen to any hologram left on too long" is simply not supported by any evidence.

Edited by 75.182.67.118
PatBerry Since: Oct, 2012
Apr 21st 2014 at 12:43:09 AM •••

No defense of the inaccurate item has been offered, so I deleted it.

TheWealthyAardvark Since: Oct, 2010
Dec 15th 2010 at 1:40:30 PM •••

Pulled from the Real Life entry, as it was getting a little cumbersome:

  • According to some AI specialists, AIs may actually evolve this way, whenever they do. Not completely by accident, but in a way that is not directly controlled by humans, which results in an intelligence quite different from what its creators had thought would develop.
    • Unfortunately for the trope, not just any old system will develop in this way. A dynamical, learning system interacting with a causally rich environment? Yes! Your PC left on too long? Nope, sadly.
      • Unless someone makes a virus/distributed computing system that uses as much computing power as possible to process inputs in an attempt to develop sentience (like SETI@Home... How cool is AI@Home? :-P)
        • Skynet@Home?
        • Couldn't (and DIDN'T) happen.
        • Pretty much all AI experts think that they have more than enough computing power for sentience (though give them enough and they'll just cheat and emulate a human brain). The problem is using it.
          • This is somewhat dated information; a computer can nowadays tell people apart better than a human being can. This, of course, doesn't change the fact that a sapient computer is still quite a while away; it needs at least sophisticated parallel processing, quite a bit more computing power, and learning algorithms that emulate brain processes. For sentience to emerge spontaneously from a machine with nothing but computing power is nigh impossible, but even when all the required components are together, an extensive learning period and interaction will be needed before the machine can achieve human-like reasoning capabilities.
        • It's called FreeHAL@home, and this troper is running it now.

Biffbiffley Since: Jan, 2001
Jul 9th 2012 at 3:43:04 PM •••

Look at this... Pulled again, for the same reasons. (Also, we don't have AIs yet IRL, so it does not fit this trope regardless.)

If you want to talk about how AI might come about, the forums would probably be a better place for it.

    Real Life 
  • According to some AI specialists, AIs may actually evolve this way, whenever they do. Not completely by accident, but in a way that is not directly controlled by humans, which results in an intelligence quite different from what its creators had thought would develop.
    • Unfortunately for the trope, not just any old system will develop in this way. A dynamical, learning system interacting with a causally rich environment? Yes! Your PC left on too long? Nope, sadly.
      • A sapient computer is still quite a while away; it needs at least sophisticated parallel processing, quite a bit more computing power, as well as learning algorithms that emulate brain processes. For sentience to emerge spontaneously from a machine with nothing but computing power is nigh impossible, and even when all the required components are together, an extensive learning period and interaction will be needed before the machine can achieve human-like reasoning capabilities.
      • While some think that we still need better hardware to build and run a human-like AI, that hasn't stopped organizations like FreeHAL from getting started on the software.

Edited by Biffbiffley
Nornagest Since: Jan, 2001
Apr 16th 2010 at 2:32:52 PM •••

  • (at a rough estimate, we're 1/5e+80ths of the way to having a self-contained AI, if Moore's law is any kind of yardstick)

5e+80 is a big number. A really big number. In fact, there are only somewhere in the neighborhood of 1e+80 particles in the universe, although estimates vary by several orders of magnitude.
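For a sense of scale, here's a quick back-of-the-envelope check (a sketch only; the 18-month doubling period is the usual textbook assumption, not a figure from the original entry):

```python
import math

# If "1/5e+80ths of the way" meant computing power still had to grow
# by a factor of 5e+80, Moore's law (assumed: one doubling every 18
# months) would put the milestone centuries away, not decades.
factor = 5e80
doublings = math.log2(factor)   # ~268 doublings
years = doublings * 1.5         # ~402 years at 18 months per doubling
print(f"A factor of 5e+80 is {doublings:.0f} doublings, about {years:.0f} years")
```

So even read charitably as a ratio of computing power, the figure implies roughly four centuries of uninterrupted doubling, which is clearly not what the entry's author had in mind.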

It's difficult to say how close we actually are to creating working AI. A neural-mapping project is underway that should be capable of simulating a human brain at the neuron level in ten years or so; whether this would qualify as artificial intelligence is a matter of interpretation.

If we're using Moore's law as a yardstick, things don't get much clearer. The human brain contains about ten billion neurons and 100 trillion synapses, compared with the roughly four billion transistors in a modern multicore CPU; however, transistors operate much faster. Hans Moravec, writing around the turn of the millennium, suggested on the basis of extrapolation from computer-vision experiments that simulating human behavior should take about 100 million MIPS of processing power, a landmark exceeded several times over by the fastest existing computers; however, his estimate rests on a highly speculative simplification of neural architecture.
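Running the same sort of arithmetic on the figures above (again a sketch; the synapse count, transistor count, and doubling period are the loose estimates quoted in this post, not precise measurements):

```python
import math

synapses = 100e12      # ~100 trillion synapses in a human brain
transistors = 4e9      # ~4 billion transistors in a modern multicore CPU
doubling_years = 1.5   # assumed Moore's-law doubling period

# Doublings until transistor counts match synapse counts.
doublings = math.log2(synapses / transistors)
print(f"~{doublings:.1f} doublings, roughly {doublings * doubling_years:.0f} years")

# Moravec's estimate: ~100 million MIPS to simulate human behavior,
# i.e. about 1e14 instructions executed per second.
moravec_ips = 100e6 * 1e6
print(f"Moravec's target: about {moravec_ips:.0e} instructions per second")
```

By raw transistor count the gap closes within a couple of decades of doubling, and by Moravec's metric it has arguably closed already; the open question is whether raw throughput translates into intelligence at all.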

Short version: we don't know how long it'll take, but it's almost certainly less than a few decades.

Edited by Nornagest
I will keep my soul in a place out of sight, Far off, where the pulse of it is not heard.