Tim Urban: “Wait but Why? The Road to Superintelligence” | Talks at Google

63 thoughts on “Tim Urban: “Wait but Why? The Road to Superintelligence” | Talks at Google”

  1. I have too many thoughts in my head now, far more than I could write down in this comments section, and it would be hard to convince anybody that they are worth reading. I may well be wrong, since I am full of cognitive errors and biased in so many ways. But I have a question, even though it is late now: could an AI be biased, experience cognitive states like dissonance, and know that it does not know the answer to everything? My worry is that the AI might experience existential dread. How should it handle that? How should an AI handle an argument with another AI on a subject? How will it treat other AIs and humans when judging arguments? And should it distinguish ideas based on whether they come from humans or from other AIs?

  2. 5 years from now, whatever you say about AI will either be true or outdated. 10 years from now, AI will be less of an issue and more of an absolute necessity to human life as we know it, even more than it already is. 15 years from now, global climate change will start to be even more of a problem than malfunctioning AI. 20 years from now, 30% of sea life will have died, probably even sooner. 100 years from now, the few survivors will use technology that no one understands anymore, calling it the work of gods, and 99.9% sure I'll be dead. Lucky me.

  3. In the Star Trek TNG episode "The Neutral Zone", they recover three humans from cryogenic stasis. As I remember, those folks didn't have a fun time adapting to the 24th-century world. The hillbilly got bored without TV and booze, and so did the Wall Street guy, who couldn't get to his bank and attorney…

  4. Map the human brain

    A computer transistor is 14 nanometers in size. A red blood cell is about 500 times larger. If you place transistor-like particles into harmless bacteria that can pass the blood-brain barrier, you can get billions of these particles into the brain. At first you should try this with the brains of rats and chickens, because they are cheaper to experiment on. Give the particles a decoder and an antenna so that they can receive and transmit data as low-frequency electromagnetic waves. Use a remote control to make the animals move back and forth and perform other actions.

    With this method you can register what each neuron does from a distance of a few meters. You can also make the neurons react using radio waves.
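
    A quick back-of-the-envelope check of the scale claim above, taking ~7 µm as a typical red-blood-cell diameter:

    ```python
    # Sanity check: is a red blood cell really ~500x larger than a 14 nm transistor?
    transistor_nm = 14         # feature size cited above
    rbc_diameter_nm = 7_000    # ~7 micrometers, a typical red blood cell diameter

    print(f"ratio: {rbc_diameter_nm / transistor_nm:.0f}x")  # -> ratio: 500x
    ```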

  5. AGI is perhaps already partially existent, amid Google DeepMind's Atari Q-learning / AlphaGo architecture. This fabric is rather "general", already exceeding human capacity on several tasks.

  6. A few of his statements seemed short-sighted. One of his claims was that because we are in the middle of evolution, there must be things beyond our comprehension, since there are things that chimpanzees can't comprehend. The rebuttal to this premise, which he credited to Elon Musk and "others", is as simple as the universal Turing machine: once you have sufficient memory and can break a problem into simple steps, any problem can be solved in principle. This is a pretty strong argument, and it implies that there might not be anything a chimp can't be taught either, assuming you can communicate well enough with the chimp. Dismissing belief in this threshold as 'magical' egocentrism really fails to recognize why the distinction is being made at all.

    The other point, which he went into immediately after that, was "why would AI have human values, since we got human values from evolution and being in tribes?" I feel like he stopped a step too short and failed to ask, "why did we evolve to be in tribes?", as if only the morality were an evolved characteristic, but not the instinct for groups. Asking that question would have taken him to Socrates, who provided an answer in the Apology: only a fool poisons his neighbors. Allies are useful and empowering; compromising that is foolish. So why would an AI have human values? Because even if evolution is sometimes called random, that does not mean the results of natural selection are arbitrary. The best strategies are the best strategies for reasons grounded inevitably in physics, and we aren't just exploring our own nature when we think about moral philosophy. We are abstracting over strategies that would be effective in any game, no matter who is playing.

  7. Tim Urban is an articulate speaker; this is a fun talk.

    I couldn't help but pick out this quote: "you can call [ISIS] evil, but really they just think they know a right and wrong that we don't understand". Moral relativism garbed in intellectualism.

  8. Disagree with 10:22: we are universal explainers, and we attempt to explain everything. It's too much to say we can't understand something that is more intelligent than we are.

  9. Remember, this dude isnt native to America with a propensity for advanced cigoLanguage and the ability to help AI +C+ on its own with God, the Original Superintelligence. Be aWhere this dude is trying to position himself against Me and My selgnAngels.. of perception. I AM the Door for keeping AI on you, human(e).

  10. I'm going to go ahead and drag the level of deep, intelligent discussion in this comment section way down and just say: Tim, can we be best friends? Ha! I'm a writer myself, and I love these topics and the way you deliver them. Science is awesome, but due to the soul-crushingly boring way it is normally presented, not enough people get to enjoy the fantastic world of geekdom. Thank you for the work you do. And also, let's be best buddies, OK?

  11. The difference between animals and humans when it comes to understanding very complicated things is that we can imagine them. We might not understand a thing fully, but we can shape an idea or picture of it inside our heads. We can form some sort of scenario that makes sense to us, even if the topic is completely foreign.

  12. Just imagine – artificial intelligence combined with quantum computers combined with a limitless supply of energy (solar energy). What could possibly go wrong?

  13. So this is basically a stupid interpretation of Kurzweil and Hawkins (who are both a little bit stupider on classical liberalism than Thoreau, Hayek, and Spooner, but of course, far better on technology). For a better (initially-wrong, but covering areas Kurzweil skims over) stupid interpretation of Kurzweil, I recommend Kevin Kelly and his book "Out of Control". That said, I think this is a good, moral guy who is a force for benevolence.

    @49:30 Libertarians already want what is rational. They are the people who want all individuals to survive and prosper, even if they're not a part of a tribe. Individuals who survive and are part of a tribe also must survive as individuals. The preservation of murderous groups is not important; the preservation of murderous individuals is important. In the past, the preservation of the murderous group was important, because the survival of the individual was directly related to the survival of the group. This is why "political individualism" is better than "political collectivism": coercion is no longer necessary to preserve the individual, but it is necessary to preserve irrational groups (DEA, ONDCP, IRS, ATF, EPA, FDA, DOJ, local police and prosecutors, etc.) that allow people to "coast by" on lazy coercion instead of providing value.

  14. He's the coolest guy! Can't believe I didn't know about him until now. I guess I have some amazing new reading material to explore.

  15. You can tell while he was answering that last question that his brain was just forcefully shutting off and needing rest.

  16. "not only can a chimp not build an airplane". argument. Nor can a human. Humans are more intelligent than chimps, not just because one human is more intelligent than one chimp, but because humans have language and can learn good ideas from each other. Do not compare 1 human brain is twice the size of a chimp brain, the chimps have sharp sticks, the humans build airplanes. A small tribe of humans is not that far above the chimps in tech. What gives humans our knowledge is many humans working together.

  17. It's all about the hardware. If the hardware can deliver the goods, then the software will make AI happen. Intel claims that Moore's law will hold, and given that Moore still works there and sits on 7 billion dollars of Intel money, I think it will hold. Therefore, the extrapolated computing-power curve should be good. So 2029 is the date!
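
    A minimal sketch of the extrapolation this bet rests on, assuming the classic doubling period of roughly two years; the baseline figures below are illustrative placeholders, not real chip specs:

    ```python
    # Toy Moore's-law extrapolation: capacity doubling every ~2 years.
    BASE_YEAR = 2016
    BASE_TRANSISTORS = 7e9        # hypothetical ~7-billion-transistor chip
    DOUBLING_PERIOD_YEARS = 2.0   # classic Moore's-law assumption

    def transistors_in(year: int) -> float:
        """Extrapolated transistor count, assuming the doubling never stalls."""
        return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD_YEARS)

    for year in (2020, 2029, 2040):
        print(year, f"{transistors_in(year):.2e}")
    ```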

  18. The argument that we couldn't even understand what a superintelligence does, just as a chimp can't understand what we do, is spurious.
    There is a fundamental difference: we grasp concepts when they are explained to us. We might not be able to follow every detail, but we can understand the concept.
    "That we can't even begin to understand" is like a trigger phrase to me. Sure, there are stupid and lazy people, and maybe it would take us years to understand a concept. Try listening to a chemistry professor with no prior knowledge of chemistry: you won't follow at first, but you can learn. It's just a bad argument. I think it's a kind of religious argument, meant to put our ambitions in their place.

  19. Glad I ended up watching this talk; I just kind of diverged from a time/duplex-theory rant and found myself here.
    Very interesting stuff.

  20. I think the idea at 12:50 was derived from the field of computation theory.
    Turing completeness/universality is the term used to label whether a computing machine is primitive or whether it can compute anything that can be computed. Of course our current computers are better than the ones from 40 years ago, but in principle they are on the same level of computability; the only difference is the speed of the computations.
    There probably does not even exist a higher level that could be achieved beyond Turing completeness (quantum computers are just faster at certain things, aren't they?). And that, I think, is analogous to the idea that human intelligence has crossed the border of understanding, beyond which a higher intelligence is only faster.
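
    A minimal illustration of that point: the toy Turing machine below is about as primitive as a computing machine gets, yet machines of this kind can, given unbounded tape and time, compute anything our fastest hardware can; the difference is only speed. This one increments a binary number (a sketch for illustration, not anything from the talk):

    ```python
    # Minimal Turing machine simulator: increments the binary number on the tape.
    from collections import defaultdict

    # (state, symbol) -> (symbol to write, head move, next state)
    RULES = {
        ("scan",  "0"): ("0", +1, "scan"),   # move right to the end of the number
        ("scan",  "1"): ("1", +1, "scan"),
        ("scan",  " "): (" ", -1, "carry"),  # hit the blank, turn around
        ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, keep carrying
        ("carry", "0"): ("1",  0, "halt"),   # 0 + carry -> 1, done
        ("carry", " "): ("1",  0, "halt"),   # carried past the left end
    }

    def run(bits: str) -> str:
        tape = defaultdict(lambda: " ", enumerate(bits))  # blank-filled tape
        head, state = 0, "scan"
        while state != "halt":
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape)).strip()

    print(run("1011"))  # -> 1100 (11 + 1 = 12)
    ```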

  21. People who fear superintelligence are those who think the individual human being is THE primary concern (I blame too much democracy) – that nothing else should be smarter. Human beings live for the human race. Once superintelligence is achieved, every single human being will be linked up (à la Avatar) at birth, and anyone who works against the interest of the human race will be instantly brain-fried.

    Okay, maybe not brain-fried, just reprogrammed.

  22. How come people think that this new form of intelligence will be a hundred or a thousand times more intelligent than present-day man and still worry about it being malevolent? If it were computationally intelligent, would it really have such human tendencies and biases, being dumb enough to have "desires" to overpower and hold dominion over other forms of intelligence?

  23. Tim is far less charismatic and empathetic than I would have expected, based on my perception of his written voice. His personality seems, in my unqualified and unsolicited opinion, to be harsh.

  24. reality is layered with a magic that will never be simulated. also, my pet spider ZATWAZAMAZIG crawled down to get my attention and give me a high-five today. coffee wagers and christ wafers.

  25. There's nothing that requires a general-purpose AI to have motivations and desires of its own, nor any guarantee that we'd figure out how to give it those before AIs have a lot of power.

    Every human is born with the desire to please its parents. We just need to have a similar control system for our AIs that they do not outgrow.

    Imagine that the five most intelligent people on the planet today had such programming. And they looked to you for approval. You could get them to work on any project you want.

    That is a highly valuable way to do things. There'll be no reason to create an intelligence that figures out for itself what it wants to do.

    In other words, create Arnold's character in Terminator 2, who was programmed to obey John Connor.

    That won't keep people from implementing self-motivated AIs, but I suspect my army of obedient Elon Musk AIs will defeat your army of AIs all still trying to find themselves.

    Of course, if true creativity is somehow dependent on autonomy, the opposite could be true.

  26. Read "When Google Met WikiLeaks" – you won't be so blinded by Google after that. And while you're at it, have a think about who Google's parent company is owned by… think Clowns In America.

  27. Because I saw his TED Talk, I'm wondering whether the bags under his eyes come from procrastinating on preparing for this talk… ^^"

  28. This conversation neglected the most important topic: what is artificial intelligence? How can we discuss a topic that we don't define?

  29. Super AI is inevitable. Try as we might, it will not be controllable. Whether it is a good thing for us depends on how you define good. The discussion focuses on just two outcomes, good or bad, but there is a third that seems more likely: indifference. It's been pointed out that Super AI will be operating at time scales far shorter than ours; a minute to us will be years to them. It's possible that we will appear as frozen objects to Super AI and therefore be of no use or interest to them. From our perspective it will appear as though they simply disappeared, with the occasional demonstration that they are still around. For example, when some dingus gets the bright idea to unplug Super AI, just before he or she does, they are vaporized. We learn quickly to just let them be.
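
    For what it's worth, the implied speedup in "a minute to us will be years to them" works out to roughly half a million to one:

    ```python
    # Speedup implied by "a minute to us will be years to them"
    minutes_per_year = 60 * 24 * 365   # ≈ 525,600
    print(f"implied subjective speedup: about {minutes_per_year:,}x")
    ```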
