In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting.
Geoffrey Hinton, the computer scientist who is often called “the godfather of A.I.,” handed me a walking stick. “You’ll need one of these,” he said. Then he headed off along a path through the woods to the shore. It wound across a shaded clearing, past a pair of sheds, and then descended by stone steps to a small dock. “It’s slippery here,” Hinton warned, as we started down.
New knowledge incorporates itself into your existing networks in the form of subtle adjustments. Sometimes they’re temporary: if you meet a stranger at a party, his name might impress itself only briefly upon the networks in your memory. But they can also last a lifetime, if, say, that stranger becomes your spouse. Because new knowledge merges with old, what you know shapes what you learn. If someone at the party tells you about his trip to Amsterdam, the next day, at a museum, your networks may nudge you a little closer to the Vermeer. In this way, small changes create the possibility for profound transformations.
“We had a bonfire here,” Hinton said. We were on a ledge of rock jutting out into Ontario’s Georgian Bay, which stretches to the west into Lake Huron. Islands dotted the water; Hinton had bought this one in 2013, when he was sixty-five, after selling a three-person startup to Google for forty-four million dollars. Before that, he’d spent three decades as a computer-science professor at the University of Toronto—a leading figure in an unglamorous subfield known as neural networks, which was inspired by the way neurons are connected in the brain. Because artificial neural networks were only moderately successful at the tasks they undertook—image categorization, speech recognition, and so on—most researchers considered them to be at best mildly interesting, or at worst a waste of time. “Our neural nets just couldn’t do anything better than a child could,” Hinton recalled. In the nineteen-eighties, when he saw “The Terminator,” it didn’t bother him that Skynet, the movie’s world-destroying A.I., was a neural net; he was pleased to see the technology portrayed as promising.
From the small depression where the fire had been, cracks in the stone, created by the heat, radiated outward. Hinton, who is tall, slim, and English, poked the spot with his stick. A scientist through and through, he is always remarking on what is happening in the physical world: the lives of animals, the flow of currents in the bay, the geology of the island. “I put a mesh of rebar under the wood, so the air could get in, and it got hot enough that the metal actually went all soft,” he said, in a wondering tone. “That’s a real fire—something to be proud of!”
For decades, Hinton tinkered, building bigger neural nets structured in ingenious ways. He imagined new methods for training them and helping them improve. He recruited graduate students, convincing them that neural nets weren’t a lost cause. He thought of himself as participating in a project that might come to fruition a century in the future, after he died. Meanwhile, he found himself widowed and raising two young children alone. During one particularly difficult period, when the demands of family life and research overwhelmed him, he thought that he’d contributed all he could. “I was dead in the water at forty-six,” he said. He didn’t anticipate the speed with which, about a decade ago, neural-net technology would suddenly improve. Computers got faster, and neural nets, drawing on data available on the Internet, started transcribing speech, playing games, translating languages, even driving cars. Around the time Hinton’s company was acquired, an A.I. boom began, leading to the creation of systems like OpenAI’s ChatGPT and Google’s Bard, which many believe are starting to change the world in unpredictable ways.
Hinton set off along the shore, and I followed, the fractured rock shifting beneath me. “Now watch this,” he said. He stood before a lumpy, person-size boulder, which blocked our way. “Here’s how you get across. You throw your stick”—he tossed his to the other side of the boulder—“and then there are footholds here and here, and a handhold here.” I watched as he scrambled over with easy familiarity, and then, more tentatively, I took the same steps myself.
Whenever we learn, our networks of neurons change—but how, exactly? Researchers like Hinton, working with computers, sought to discover “learning algorithms” for neural nets, procedures through which the statistical “weights” of the connections among artificial neurons could change to assimilate new knowledge. In 1949, a psychologist named Donald Hebb proposed a simple rule for how people learn, often summarized as “Neurons that fire together wire together.” Once a group of neurons in your brain activates in synchrony, it’s more likely to do so again; this helps explain why doing something is easier the second time. But it quickly became apparent that computerized neural networks needed another approach in order to solve complicated problems. As a young researcher, in the nineteen-sixties and seventies, Hinton drew networks of neurons in notebooks and imagined new knowledge arriving at their borders. How would a network of a few hundred artificial neurons store a concept? How would it revise that concept if it turned out to be flawed?
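Hebb’s rule fits in a few lines of code. Here is a minimal sketch of the idea, with three artificial neurons whose activations, and the learning rate, are invented for illustration:

```python
import numpy as np

def hebbian_update(weights, activations, lr=0.1):
    """Strengthen the connection between every pair of co-active neurons."""
    # Outer product: the weight between neurons i and j grows in
    # proportion to how strongly they fired together.
    return weights + lr * np.outer(activations, activations)

# Three neurons; the first two fire together, the third stays silent.
w = np.zeros((3, 3))
w = hebbian_update(w, np.array([1.0, 1.0, 0.0]))
print(w[0, 1])  # 0.1: the co-active pair is now more strongly wired
print(w[0, 2])  # 0.0: connections to the silent neuron are unchanged
```

As the paragraph notes, co-activation alone was not enough for hard problems: the rule has no way to tell which of many connections, layers deep, deserve the credit or the blame for an outcome.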
We made our way around the shore to Hinton’s cottage, the only one on the island. Glass-enclosed, it stood on stilts atop a staircase of broad, dark rocks. “One time, we came out here and a huge water snake stuck his head up,” Hinton said, as we neared the house. It was a fond memory. His father, a celebrated entomologist who’d named a little-known stage of metamorphosis, had instilled in him an affection for cold-blooded creatures. When he was a child, he and his dad kept a pit full of vipers, turtles, frogs, toads, and lizards in the garage. Today, when Hinton is on the island—he is often there in the warmer months—he sometimes finds snakes and brings them into the house, so that he can watch them in a terrarium. He is a good observer of nonhuman minds, having spent a lifetime thinking about thinking from the bottom up.
Earlier this year, Hinton left Google, where he’d worked since the acquisition. He was worried about the potential of A.I. to do harm, and began giving interviews in which he talked about the “existential threat” that the technology might pose to the human species. The more he used ChatGPT, an A.I. system trained on a vast corpus of human writing, the more uneasy he got. One day, someone from Fox News wrote to him asking for an interview about artificial intelligence. Hinton enjoys sending snarky single-sentence replies to e-mails—after receiving a lengthy note from a Canadian intelligence agency, he responded, “Snowden is my hero”—and he began experimenting with a few one-liners. Eventually, he wrote, “Fox News is an oxy moron.” Then, on a lark, he asked ChatGPT if it could explain his joke. The system told him his sentence implied that Fox News was fake news, and, when he called attention to the space before “moron,” it explained that Fox News was addictive, like the drug OxyContin. Hinton was astonished. This level of understanding seemed to represent a new era in A.I.
There are many reasons to be concerned about the advent of artificial intelligence. It’s common sense to worry about human workers being replaced by computers, for example. But Hinton has joined many prominent technologists, including Sam Altman, the C.E.O. of OpenAI, in warning that A.I. systems may start to think for themselves, and even seek to take over or eliminate human civilization. It was striking to hear one of A.I.’s most prominent researchers give voice to such an alarming view.
“People say, It’s just glorified autocomplete,” he told me, standing in his kitchen. (He has suffered from back pain for most of his life; it eventually grew so severe that he gave up sitting. He has not sat down for more than an hour since 2005.) “Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.” Hinton thinks that “large language models,” such as GPT, which powers OpenAI’s chatbots, can comprehend the meanings of words and ideas.
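The “glorified autocomplete” of the dismissal is easy to build. A toy sketch (the corpus is invented) shows how shallow pure next-word counting is, by way of contrast with what Hinton is describing:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def autocomplete(word):
    """Predict the most frequent next word seen in training."""
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # "cat": seen twice after "the", versus "mat" once
```

Hinton’s point is that a system trained to do this task really well cannot stop at surface counts like these; minimizing its prediction error pushes it toward representing what the words mean.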
Skeptics who say that we overestimate the power of A.I. point out that a great deal separates human minds from neural nets. For one thing, neural nets don’t learn the way we do: we acquire knowledge organically, by having experiences and grasping their relationship to reality and ourselves, while they learn abstractly, by processing huge repositories of information about a world that they don’t really inhabit. But Hinton argues that the intelligence displayed by A.I. systems transcends its artificial origins.
“When you eat, you take food in, and you break it down to these tiny components,” he told me. “So you could say that the bits in my body are made from bits of other animals. But that would be very misleading.” He believes that, by analyzing human writing, a large language model like GPT learns how the world works, producing a system capable of thought; writing is only part of what that system can do. “It’s analogous to how a caterpillar turns into a butterfly,” he went on. “In the chrysalis, you turn the caterpillar into soup—and from this soup you build the butterfly.”
He began rooting around in a small cupboard just off the kitchen. “Aha!” he said. With a flourish, he put an object on the counter—a dead dragonfly. It was perfectly preserved. “I found this at the marina,” he explained. “It had just hatched on a rock and was drying its wings, so I caught it. Look underneath.” Hinton had captured the dragonfly just after it had emerged from its larval form. The larva was a quite different-looking insect, with its own eyes and legs; it had a hole in its back, through which the dragonfly had crawled.
“The larva of the dragonfly is this monster that lives under the water,” Hinton said. “And, like in the movie ‘Alien,’ the dragonfly is breaking out of the back of the monster. The larva went into a phase where it got turned into soup, and then a dragonfly was built out of the soup.” In his metaphor, the larva represented the data that had gone into training modern neural nets; the dragonfly stood for the agile A.I. that had been created from it. Deep learning—the technology that Hinton helped pioneer—had caused the metamorphosis. I bent closer to get a better look; Hinton stood upright, as he almost always does, careful to preserve his posture. “It’s very beautiful,” he said softly. “And you get the point. It started as one thing, and it’s become something else.”
A few weeks earlier, when Hinton had invited me to visit his island, I’d imagined possible scenarios. Perhaps he’d be an introvert who wanted solitude, or a tech overlord with a God complex and a futuristic compound. Several days before my arrival, he e-mailed me a photograph he’d taken of a rattlesnake coiled in the island’s grass. I wasn’t sure whether I felt delighted or scared.
In fact, as private islands go, Hinton’s is fairly modest—two acres in total. Hinton himself is the opposite of a Silicon Valley techno-messiah. Now seventy-five, he has an English face out of a Joshua Reynolds painting, with white hair framing a broad forehead; his blue eyes are often steady, leaving his mouth to express emotion. A mordant raconteur, he enjoys talking about himself—“ ‘Geoff’ is an anagram for ‘ego fortissimo,’ ” he told me—but he’s not an egotist; his life has been too grief-shadowed for that. “I should probably tell you about my wives,” he said, the first time we spoke. “I’ve had three marriages. One ended amicably, the other two in tragedy.” He is still friendly with Joanne, his first wife, whom he married early, but his second and third wives, Rosalind and Jackie, both died of cancer, in 1994 and 2018, respectively. For the past four years, Hinton has been with Rosemary Gartner, a retired sociologist. “I think he’s the kind of person who always needs a partner,” she told me, tenderly. He is a romantic rationalist, with a sensibility balancing science and emotion. In the cottage, a burgundy canoe sits in the single large room that makes up most of the ground floor; he and Jackie had found it in the island’s woods, in disrepair, and Jackie, an art historian, worked with some women canoe-builders to reconstruct it during the years coinciding with her illness. “She had the maiden voyage,” Hinton said. No one has used it since.
He stowed the dragonfly, then walked over to a small standing desk, where a laptop was perched next to a pile of sudoku puzzles and a notebook containing computer passwords. (He rarely uses the notebook, having devised a mnemonic system that enables him to generate and recall very long passwords in his head.) “Shall we do the family tree?” he asked. Using two fingers—he doesn’t touch-type—he entered “Geoffrey Hinton family tree” and hit Return. When Google acquired Hinton’s startup, in 2013, it did so in part because the team had figured out how to dramatically improve image recognition using neural nets; now endless family trees swarmed the screen.
Hinton comes from a particular kind of scientific English family: politically radical, restlessly inventive. Above him in the family tree are his great-uncle Sebastian Hinton, the inventor of the jungle gym, and his cousin Joan Hinton, who worked as a physicist on the Manhattan Project. Further back, he was preceded by Lucy Everest, the first woman to become an elected member of the Royal Institute of Chemistry; Charles Howard Hinton, the mathematician who created the concept of the tesseract, a doorway into the fourth dimension (one appears in the film “Interstellar”); and James Hinton, a groundbreaking ear surgeon and an advocate of polygamy. (“Christ was the savior of men, but I am the savior of women,” he is said to have remarked.) In the mid-nineteenth century, a great-great-grandfather of Hinton’s, the English mathematician George Boole, developed the system of binary reasoning, now known as Boolean algebra, that is fundamental to all computing. Boole was married to Mary Everest, a mathematician and author and the niece of George Everest, the surveyor for whom Mt. Everest is named.
“Geoff was born into science,” Yann LeCun, a former student and collaborator of Hinton’s who now runs A.I. at Meta, told me. Yet Hinton’s family was odder than that. His dad, Howard Everest Hinton, grew up in Mexico during the Mexican Revolution, in the nineteen-tens, on a silver mine managed by his father. “He was tough,” Hinton said of his dad: family lore holds that, at age twelve, Howard threatened to shoot his boxing coach for being too heavy-handed, and the coach took him seriously enough to leave town. Howard’s first language was Spanish, and at Berkeley, where he went to college, he was mocked for his accent. “He hung out with a bunch of Filipinos, who were also discriminated against, and he became a Berkeley radical,” Hinton said. Howard’s mature politics were not just Marxist but Stalinist: in 1968, as Soviet tanks rolled into Prague, he said, “About time!”
At school, Hinton was inclined toward science. But, for ideological reasons, his father forbade him to study biology; in Howard’s view, the possibility of genetic determinism contravened the Communist belief in the ultimate malleability of human nature. (“I hate faiths of all kinds,” Hinton said, remembering this period.) Howard, who taught at the University of Bristol, was a kind of entomologist Indiana Jones: he smuggled rare creatures from around the world back to England in his luggage, and edited an important journal in his field. Hinton, whose middle name is also Everest, felt immense pressure to make his own mark. He recalls his father telling him, “If you work twice as hard as me, when you’re twice as old as I am you might be half as good.”
At Cambridge, Hinton tried different fields but was dismayed to find that he was never the brightest student in any given class. He left college briefly to “read depressing novels” and to do odd jobs in London, then returned to attempt architecture, for about a day. Finally, after dipping into physics, chemistry, physiology, and philosophy, looking for a focus, he settled on a degree in experimental psychology. He haunted the office hours of the moral philosopher Bernard Williams, who turned out to be interested in computers and the mind. One day, Williams pointed out that our different thoughts must reflect different physical arrangements inside our brains; this was quite unlike the situation inside a computer, in which the software was independent of the hardware. Hinton was struck by this observation; he remembered how, in high school, a friend had told him that memory might be stored in the brain “holographically”—that is, spread out, but in such a way that the whole could be accessed through any one part. What he was encountering was “connectionism”—an approach that combined neuroscience, math, philosophy, and programming to explore how neurons could work together to “think.” One goal of connectionism was to create a brainlike system in a computer. There had been some progress: the Perceptron, a machine built in the nineteen-fifties by a psychologist and pioneering connectionist named Frank Rosenblatt, had used simple computer hardware to simulate a network of hundreds of neurons. When connected to a light sensor, the apparatus could recognize letters and shapes by tracking which artificial neurons were activated by different patterns of light.
In the cottage, Hinton stood and strolled, ranging back and forth behind the kitchen counter and around the first floor. He made some toast, got us each an apple, and then set up a little booster table for himself using a step stool. Family pressure had had the effect of pushing him out of temporary satisfactions. “I always loved woodwork,” he recalled wistfully, while we ate. “At school, you could do it voluntarily in the evenings. And I’ve often wondered whether I’d have been happier as an architect, because I didn’t have to force myself to do it. Whereas, with science, I’ve always had to force myself. Because of the family, I had to succeed at it—I had to find a path. There was joy in it, but it was mostly anxiety. Now it’s an enormous relief that I’ve succeeded.”
Hinton’s laptop dinged. Ever since he’d left Google, his in-box had been exploding with requests for comment on A.I. He ambled over and looked at the e-mail, and then got lost again in the forest of family trees, all of which seemed to be wrong in one way or another.
“Look at this,” he said.
I walked over and peered at the screen. It was an “academic family tree,” showing Hinton at the top with his students, and theirs, arrayed below. The tree was so broad that he had to scroll horizontally to see the extent of his influence. “Oh, dear,” Hinton said, exploring. “She wasn’t really a student of mine.” He scrolled further. “He was brilliant but not so good as an adviser, because he could always do it better himself.” A careful nurturer of talent, Hinton seems to enjoy being surpassed by his students: when evaluating job candidates, he used to ask their advisers, “But are they better than you?” Recalling his father, who died in 1977, Hinton said, “He was just extremely competitive. And I’ve often wondered, if he’d been around to see me be successful, whether he’d have been entirely happy. Because now I’ve been more successful than he was.”
According to Google Scholar, Hinton is now the second most cited researcher among psychologists, and the most cited among computer and cognitive scientists. If he had a slow and eccentric start at Cambridge, it was partly because he was circling an emerging field. “Neural networks—there were very few people at good universities who did it,” he said, closing the laptop. “You couldn’t do it at M.I.T. You couldn’t do it at Berkeley. You couldn’t do it at Stanford.” There were advantages to being a hub in a nascent network. For years, many of the best minds came to him.
“The weather’s good,” Hinton said, the next morning. “We should cut down a tree.” He wore a dress shirt tucked into khakis and didn’t look much like a lumberjack; still, he rubbed his hands together. On the island, he is always cutting down trees to create more orderly and beautiful tableaus.
The house, too, is a work in progress. Few contractors would travel to a place so remote, and the people Hinton hired made needless mistakes (running a drainage pipe uphill, leaving floors half finished) that still enrage him today. Almost every room harbors a corrective mini-project, and, when I visited, Hinton had appended little notes to them to help a new contractor, often writing on the building materials themselves. In the first-floor bathroom, a piece of baseboard propped against the wall read “Bathroom should have THIS type of baseboard (maple trim in front of shower only).” In the guest-room closet, masking tape ran along a shelf: “Do not prime shelf, prime shelf support.”
It’s useful for minds to label things; it helps them get a grip on reality. But what would it mean for an artificial mind to do so? While Hinton was earning a Ph.D. in artificial intelligence from the University of Edinburgh, he thought about how “knowing” in a brain might be simulated in a computer. At that time, in the nineteen-seventies, the vast majority of A.I. researchers were “symbolists.” In their view, knowing about, say, ketchup might involve a number of concepts, such as “food,” “sauce,” “condiment,” “sweet,” “umami,” “red,” “tomato,” “American,” “French fries,” “mayo,” and “mustard”; together, these could create a scaffold on which a new concept like “ketchup” might be hung. A large, well-funded A.I. effort called Cyc centered on the construction of a vast knowledge repository into which scientists, using a special language, could enter concepts, facts, and rules, along with their inevitable exceptions. (Birds fly, but not penguins or birds with damaged wings or . . .)
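A toy flavor of the symbolist style: rules, a taxonomy, and hand-entered exceptions that the program reasons over. (The dictionary format here is invented for illustration; Cyc’s actual representation language is far more elaborate.)

```python
# Hand-entered knowledge: general rules, explicit exceptions, and a taxonomy.
rules = {"bird": {"can_fly": True}}
exceptions = {"penguin": {"can_fly": False}}
is_a = {"penguin": "bird", "robin": "bird"}

def can_fly(thing):
    # Exceptions override the general rule inherited from the taxonomy.
    if "can_fly" in exceptions.get(thing, {}):
        return exceptions[thing]["can_fly"]
    kind = is_a.get(thing, thing)
    return rules.get(kind, {}).get("can_fly", False)

print(can_fly("robin"))    # True: inherits the rule for birds
print(can_fly("penguin"))  # False: the exception wins
```

The trouble, as the parenthetical hints, is that every rule sprouts exceptions, and exceptions to the exceptions, each of which a human must notice and type in.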
But Hinton was doubtful of this approach. It seemed too rigid, and too focussed on the reasoning skills possessed by philosophers and linguists. In nature, he knew, many animals acted intelligently without access to concepts that could be expressed in words. They simply learned how to be smart through experience. Learning, not knowledge, was the engine of intelligence.
Sophisticated human thinking often seemed to happen through symbols and words. But Hinton and his collaborators, James L. McClelland and David Rumelhart, believed that much of the action happened on a sub-conceptual level. Notice, they wrote, how, “if you learn a new fact about an object, your expectations about other similar objects tend to change”: if you’re told that chimpanzees like onions, for instance, you might guess that gorillas like them, too. This suggested that knowledge was likely “distributed” in the mind—created out of smaller building blocks that could be shared among related ideas. There wouldn’t be two separate networks of neurons for the concepts “chimpanzee” and “gorilla”; instead, bundles of neurons representing various concrete or abstract “features”—furriness, quadrupedness, primateness, animalness, intelligence, wildness, and so on—might be activated in one way to signify “chimpanzee” and in a slightly different way to signify “gorilla.” To this cloud of features, onion-liking-ness might be added. A mind constructed this way risked falling into confusion and error: mix qualities together in the wrong arrangement and you’d get a fantasy creature that was neither gorilla nor chimp. But a brain with the right learning algorithm might adjust the weights among its neurons to favor sensible combinations over incoherent ones.
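The chimpanzee-and-onions example can be put in numbers. In this toy sketch (the feature names and values are invented), each concept is a pattern over shared features, so a fact fitted to one concept automatically leaks to its neighbors:

```python
import numpy as np

# Each concept is a pattern of activity over the same shared features:
# [furriness, quadrupedness, primateness, wildness]
concepts = {
    "chimpanzee": np.array([0.9, 0.8, 1.0, 0.9]),
    "gorilla":    np.array([0.9, 0.9, 1.0, 0.8]),
    "goldfish":   np.array([0.0, 0.0, 0.0, 0.3]),
}

# "Likes onions" is learned as weights on the shared features, fitted to the
# single observed example (a chimpanzee that likes onions).
chimp = concepts["chimpanzee"]
onion_weights = chimp / np.dot(chimp, chimp)

def likes_onions(name):
    return float(np.dot(concepts[name], onion_weights))

print(likes_onions("gorilla"))   # near 1.0: gorillas share the chimp's features
print(likes_onions("goldfish"))  # near 0.0: almost no shared features
```

Nothing was stored about gorillas at all; the generalization falls out of the overlap between the two feature patterns, which is the point Hinton, McClelland, and Rumelhart were making.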
Hinton continued to explore these ideas, first at the University of California, San Diego, where he did a postdoc (and married Joanne, whom he tutored in computer vision); then at Cambridge, where he worked as a researcher in applied psychology; and then at Carnegie Mellon, in Pittsburgh, where he became a computer-science professor in 1982. There, he spent much of his research budget on a single computer powerful enough to run a neural net. He soon got married a second time, to Rosalind Zalin, a molecular biologist. At Carnegie Mellon, Hinton had a breakthrough. Working with Terrence Sejnowski, a computer scientist and a neuroscientist, he produced a neural net called the Boltzmann Machine. The system was named for Ludwig Boltzmann, the nineteenth-century Austrian physicist who described, mathematically, how the large-scale behavior of gases was related to the small-scale behavior of their constituent particles. Hinton and Sejnowski combined these equations with a theory of learning.
Hinton was reluctant to explain the Boltzmann Machine to me. “I’ll tell you what this is like,” he said. “It’s like having a small child, and you decide to go on a walk. And there’s a mountain ahead of you, and you have to get this little child to the top of the mountain and back.” He looked at me—the child in the metaphor—and sighed. He worried, reasonably, that I might be misled by a simplified explanation and then mislead others. “It’s no use trying to explain complicated ideas that you don’t understand. First, you have to understand how something works. Otherwise, you just produce nonsense.” Finally, he took some sheets of paper and began drawing diagrams of neurons connected by arrows and writing out equations, which I tried to follow. (Ahead of my visit, I’d done a Khan Academy course on linear algebra.)
One way to understand the Boltzmann Machine, he suggested, was to imagine an Identi-Kit: a system through which various features of a face—bushy eyebrows, blue eyes, crooked noses, thin lips, big ears, and so on—can be combined to produce a composite sketch, of the sort used by the police. For an Identi-Kit to work, the features themselves have to be appropriately designed. The Boltzmann Machine could learn not just to assemble the features but to design them, by altering the weights of the connections among its artificial neurons. It would start with random features that looked like snow on a television screen, and then proceed in two phases—“waking” and “sleeping”—to refine them. While awake, it would tweak the features so that they better fit an actual face. While asleep, it would fantasize a face that didn’t exist, and then alter the features so that they were a worse fit.
Its dreams told it what not to learn. There was an elegance to the system: over time, it could move away from error and toward reality, and no one had to tell it if it was right or wrong—it needed only to see what existed, and to dream about what didn’t.
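In published accounts of the Boltzmann Machine, that two-phase idea becomes a contrastive weight update: raise the weights for correlations observed while the network is clamped to data, and lower them for correlations it produces while running free. A crude sketch of the signal (random bits here stand in for the real Gibbs-sampled states, and the data are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlations(states):
    """Average pairwise co-activation over a batch of binary unit states."""
    return states.T @ states / len(states)

waking   = rng.integers(0, 2, size=(100, 8))  # stand-in for data-clamped states
sleeping = rng.integers(0, 2, size=(100, 8))  # stand-in for free-running fantasies

lr = 0.05
weights = np.zeros((8, 8))
# Move toward what exists, and away from what was merely dreamed.
weights += lr * (correlations(waking) - correlations(sleeping))
```

The elegance the paragraph describes is visible in the last line: no teacher supplies a right answer; the learning signal is just the difference between the statistics of reality and the statistics of the dreams.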
Hinton and Sejnowski described the Boltzmann Machine in a 1983 paper. “I read that paper when I was starting my graduate studies, and I said, ‘I absolutely have to talk to these guys—they’re the only people in the world who understand that we need learning algorithms,’ ” Yann LeCun told me. In the mid-eighties, Yoshua Bengio, a pioneer in natural-language processing and in computer vision who is now the scientific director at Mila, an A.I. institute in Quebec, trained a Boltzmann Machine to recognize spoken syllables as part of his master’s thesis. “Geoff was one of the external reviewers,” he recalled. “And he wrote something like ‘This should not work.’ ” Bengio’s version of the Boltzmann Machine was more effective than Hinton expected; it took Bengio a few years to figure out why. This would become a familiar pattern. In the following decades, neural nets would often perform better than expected, perhaps because new structures had formed among the neurons during training. “The experimental part of the work came before the theory,” Bengio recalled. Often, it was a matter of trying new approaches and seeing what the networks came up with.
Partly because Rosalind loathed Ronald Reagan, Hinton said, they moved to the University of Toronto. They adopted two children, a boy and a girl, from Latin America, and lived in a house in the city. “I was this kind of socialist professor who was dedicated to his work,” Hinton said.
Rosalind had struggled with infertility, and had bad experiences with callous doctors. Perhaps as a result, she pursued a homeopathic route when she was later diagnosed with ovarian cancer. “It just didn’t make any sense,” Hinton said. “It couldn’t be that you make things more dilute and they get more powerful.” He couldn’t see how a molecular biologist could become a homeopath. Still, determined to treat the cancer herself, Rosalind refused to have surgery even after an exam found a tumor the size of a grapefruit; later, she consented to an operation but declined chemotherapy, instead pursuing increasingly expensive homeopathic remedies, first in Canada and then in Switzerland. She developed secondary tumors. She asked Hinton to sell their house so that she could pay for new homeopathic treatments. “I drew the line there,” he recalled, squinting with fresh pain. “I said, ‘No, we’re not selling the house. Because if you die I’m going to have to look after the children, and it’s much better for them if we can stay.’ ”
Rosalind returned to Canada and went immediately into the hospital. She hung on for a couple of months, but wouldn’t let the children visit her until the day before she died, because she didn’t want them to see her so sick. Throughout her illness, she was convinced that she’d soon get well. Describing what happened, Hinton still seems overwhelmed—he is angry, guilty, wounded, mystified. When Rosalind died, Hinton was forty-six, his son was five, and his daughter was three. “She hurt people by failing to accept that she was going to die,” he said.
The sound of waves filled the midafternoon quiet. Strong yellow sun spilled through the room’s floor-to-ceiling windows; faint spiderwebs extended across them, silhouetted by the light. Hinton stood for a while, collecting himself.
“I think I need to go cut down a tree,” he said.
We walked out the front door and down the path to the sheds. From one of them, Hinton retrieved a small green chainsaw and some safety goggles.
“Rosemary says I’m not allowed to cut down trees when there’s nobody else here, in case I chop off an arm or something,” he said. “Have you driven boats before?”
“No,” I said.
“I’ve got to not chop off my right arm, then.”
Over his khakis, he strapped on a pair of protective chaps.
“I don’t want to give you the impression that I know what I’m doing,” he said. “But the basic idea is, you cut lots of V’s, and then the tree falls down.”
Hinton crossed the path to the tree that he had in mind, inspecting the bushes for snakes as we walked. The tree was a leafy cedar, perhaps twenty feet tall; Hinton looked up to see which way it was leaning, then started the saw and began to cut into the trunk on the side of the lean. He removed the saw, and made another converging cut to form a V.
Hinton worked the chainsaw in silence, occasionally stopping to wipe his brow. It was hot in the sun, and mosquitoes swarmed every shady nook. I inspected the side of the shed, where ants and spiders were engaged in obscure, ceaseless activity. Down at the end of the path, the water shone. It was a beautiful spot. Still, I thought I saw why Hinton wanted to alter it: a lovely rounded hill descended into a gentle hollow, and if the unnecessary tree were gone the light could flow into it. The tree was an error.
Eventually, he began a second cut on the other side of the tree, angling it toward the first. Then he stopped and turned to me. "Because the tree leans away from the cut, the V will open up as you go deeper, and the blade won't get stuck," he explained. He continued the upper cut, nudging the tree toward an entropic moment. Suddenly, almost soundlessly, gravity took over. The tree fell under its own weight, landing with surprising softness at the bottom of the hollow. The light streamed in.
Hinton was in love with the Boltzmann Machine. He hoped that it, or something like it, might underlie learning in the actual brain. "It should be true," he told me. "If I was God, I'd make it true." But further experimentation revealed that as Boltzmann Machines grew they tended to become overwhelmed by the randomness that was built into them. "Geoff and I disagreed about the Boltzmann Machine," LeCun said. "Geoff thought it was the most beautiful algorithm. I thought it was ugly. It was stochastic"—that is, based partly on randomness. By contrast, LeCun said, "I thought backprop was super clean."
"Backprop," or backpropagation, was an algorithm that had been explored by a few different researchers beginning in the nineteen-sixties. Even as Hinton was working with Sejnowski on the Boltzmann Machine, he was also collaborating with Rumelhart and another computer scientist, Ronald Williams, on backprop. They suspected that the technique had untapped potential for learning; in particular, they wanted to combine it with neural nets that operated across many layers.
One way to understand backprop is to imagine a Kafkaesque judicial system. Picture an upper layer of a neural net as a jury that must try cases in perpetuity. The jury has just reached a verdict. In the dystopia in which backprop unfolds, the judge can tell the jurors that their verdict was wrong, and that they will be punished until they reform their ways. The jurors discover that three of them were especially influential in leading the group down the wrong path. This apportionment of blame is the first step in backpropagation.
In the next step, the three wrongheaded jurors determine how they themselves became misinformed. They consider their own influences—parents, teachers, pundits, and the like—and identify the individuals who misinformed them. Those blameworthy influencers, in turn, must identify their respective influences and apportion blame among them. Recursive rounds of finger-pointing ensue, as each layer of influencers calls its own influences to account, in a backward-sweeping cascade. Eventually, once it's known who has misinformed whom and by how much, the network adjusts itself proportionately, so that individuals listen to their "bad" influences a little less and to their "good" influences a little more. The whole process repeats again and again, with mathematical precision, until verdicts—not just in this one case but in all cases—are collectively as "correct" as possible.
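For readers who want the blame-apportionment made mechanical, here is a minimal sketch in Python: a tiny two-layer network trained by backpropagation on the classic XOR problem. The architecture, the task, and the learning rate are illustrative choices, not details drawn from Hinton's own experiments.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny 2-2-1 network learning XOR. The output unit is the "jury";
# the two hidden units are its "influences."
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Step 1: how much is the verdict to blame, and in which direction?
        dy = 2 * (y - t) * y * (1 - y)
        # Step 2: pass that blame backward to each hidden "influence."
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Step 3: each connection listens to its "bad" influences a little
        # less and its "good" influences a little more.
        for j in range(2):
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh[j] * x[0]
            W1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

loss_after = total_loss()
```

After enough rounds of finger-pointing and adjustment, the network's verdicts are collectively closer to correct than they were at the start.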
In 1986, Hinton, Rumelhart, and Williams published a three-page paper in Nature showing how such a system could work in a neural net. They noted that backprop, like the Boltzmann Machine, wasn't "a plausible model of learning in brains": unlike a computer, a brain can't rewind the tape to audit its past performance. But backprop still enabled a brainlike neural specialization. In real brains, neurons are sometimes arranged in structures aimed at solving specific problems: in the visual system, for instance, different "columns" of neurons recognize edges in what we see. Something similar emerges in a backprop network. Higher layers subject lower ones to a kind of evolutionary pressure; as a result, certain layers of a network that's tasked with deciphering handwriting, for instance, might become tightly focussed on identifying lines, curves, or edges. Eventually, the system as a whole can develop "appropriate internal representations." The network knows, and makes use of its knowledge.
In the nineteen-fifties and sixties, a great deal of excitement had accompanied the Perceptron and other connectionist efforts; enthusiasm for connectionism waned in the years after. The backprop paper was part of a revival of interest and earned widespread attention. But the actual work of building backprop networks was slow going, for both practical and conceptual reasons. Practically, computers were sluggish. "The rate of progress was basically, How much could a computer learn overnight?" Hinton recalled. "The answer was often not much." Conceptually, neural nets were mysterious. It wasn't possible to program one in the traditional way. You couldn't go in and edit the weights of the connections among artificial neurons. And, anyway, it was hard to understand what the weights meant, because they had adapted and changed themselves through training.
There were many ways the learning process could go wrong. In "overfitting," for example, a network effectively memorized the training data instead of learning to generalize from it. Avoiding the various pitfalls wasn't always straightforward, because it was up to the network to learn. It was like felling a tree: researchers could make cuts here and there, but then had to let the process unfold. They could try techniques like "ensembling" (combining weak networks to make a strong one) or "early stopping" (letting a network learn, but not too much). They could "pre-train" a system, by taking a Boltzmann Machine, having it learn something, and then layering a backprop network on top of it, so that a system's "supervised" training didn't begin until it had acquired some elemental knowledge on its own. Then they'd let the network learn, hoping that it would land where they wanted it.
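"Early stopping" can be made concrete. In the sketch below (my own illustration, not code from the researchers), training is judged by a held-out validation score: it is allowed to continue while that score keeps improving and is halted once it has failed to improve for a few rounds, which is the standard way of letting a network learn "but not too much."

```python
def early_stop(val_losses, patience=3):
    """Return (best_epoch, best_loss): where training should have stopped."""
    best, best_epoch, waited = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break  # no improvement for `patience` epochs: stop here
    return best_epoch, best

# A typical overfitting curve: validation loss falls, bottoms out,
# then climbs as the network starts memorizing the training data.
curve = [1.0, 0.6, 0.4, 0.35, 0.37, 0.45, 0.6, 0.8]
print(early_stop(curve))  # → (3, 0.35)
```

The numbers are invented; the point is that the network's weights are kept from the epoch where validation loss bottomed out, not from the end of training.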
New neural-net "architectures" were developed: "recurrent" and "convolutional" networks allowed the systems to make progress by building on their own work in different ways. But it was as though researchers had discovered an alien technology that they didn't know how to use. They turned the Rubik's Cube this way and that, trying to pull order out of noise. "I was always convinced it wasn't nonsense," Hinton said. "It wasn't really faith—it was just completely obvious to me." The brain used neurons to learn; therefore, complex learning through neural networks must be possible. He would work twice as hard for twice as long.
When networks were trained through backprop, they needed to be told when they were wrong and by how much; this required vast amounts of accurately labelled data, which would allow networks to see the difference between a handwritten "7" and a "1," or between a golden retriever and a red setter. But it was hard to find well-labelled datasets that were big enough, and building more was a slog. LeCun and his collaborators developed a giant database of handwritten numerals, which they later used to train networks that could read sample Zip Codes provided by the U.S. Postal Service. A computer scientist named Fei-Fei Li, at Stanford, spearheaded a gargantuan effort called ImageNet; creating it required collecting more than fourteen million images and sorting them into twenty thousand categories by hand.
As neural nets grew larger, Hinton devised a way of getting knowledge from a large network into a smaller one that might run on a device like a mobile phone. "It's called distillation," he explained, in his kitchen. "Back in school, the art teacher would show us some slides and say, 'That's a Rubens, and that's a van Gogh, and this is William Blake.' But suppose that the art teacher tells you, 'O.K., this is a Titian, but it's a peculiar Titian because aspects of it are quite like a Raphael, which is very unusual for a Titian.' That's much more helpful. They're not just telling you the right answer—they're telling you other plausible answers." In distillation learning, one neural net provides another not just with correct answers but with a range of possible answers and their probabilities. It was a richer kind of knowledge.
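The idea can be sketched numerically. Below, a hypothetical "teacher" network is confident a painting is a Titian but assigns Raphael some probability too; a "student" that matches the teacher's whole softened distribution scores a lower distillation loss than one that only nails the top answer. The logits, temperature, and three-painter setup are invented for illustration.

```python
import math

def softmax(logits, T=1.0):
    """Turn raw scores into probabilities; higher T gives a softer spread."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Teacher's raw scores for one painting: [Titian, Raphael, Blake].
teacher_logits = [5.0, 3.0, -2.0]
T = 4.0  # temperature: softens the distribution to expose "other plausible answers"
soft_targets = softmax(teacher_logits, T)

def distill_loss(student_logits):
    # Cross-entropy between the teacher's softened answers and the student's.
    student = softmax(student_logits, T)
    return -sum(p * math.log(q) for p, q in zip(soft_targets, student))

# A student that copies the teacher's full range of answers...
matching = distill_loss([5.0, 3.0, -2.0])
# ...versus one that only learns "it's a Titian" and dismisses the rest.
argmax_only = distill_loss([5.0, -5.0, -5.0])
```

The matching student is penalized less, because it has absorbed not just the right answer but the teacher's sense of which wrong answers were nearly right.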
A few years after Rosalind's death, Hinton reconnected with Jacqueline Ford, an art historian whom he'd dated briefly before moving to the United States. Jackie was cultured, warm, curious, beautiful. "She's way out of your league," his sister said. Still, Jackie gave up her job in the U.K. to move to Toronto. They got married on December 6, 1997—Hinton's fiftieth birthday. The following decades would be the happiest of his life. His family was whole again. His children loved their new mother. He and Jackie started exploring the islands in Georgian Bay. Recalling this time, he gazed at the canoe in his living room. "We found it in the woods, upside down, covered in canvas, and it was just totally rotten—everything about it was rotten," he said. "But Jackie decided to rescue it anyway, like she did with me and the kids."
Hinton was not in love with backpropagation. "It's so unsatisfying intellectually," he told me. Unlike the Boltzmann Machine, "it's all deterministic. Unfortunately, it just works better." Slowly, as practical advances compounded, the power of backprop became undeniable. In the early seventies, Hinton told me, the British government had hired a mathematician named James Lighthill to determine if A.I. research had any plausible chance of success. Lighthill concluded that it didn't—"and he was right," Hinton said, "if you accepted the assumption, which everyone made, that computers might get a thousand times faster, but they wouldn't get a billion times faster." Hinton did a calculation in his head. Suppose that in 1985 he'd started running a program on a fast research computer, and left it running until now. If he started running the same program today, on the fastest systems currently used in A.I., it would take less than a second to catch up.
In the early two-thousands, as multi-layer neural nets equipped with powerful computers began to train on much larger data sets, Hinton, Bengio, and LeCun started talking about the potential of "deep learning." The work crossed a threshold in 2012, when Hinton, Alex Krizhevsky, and Ilya Sutskever came out with AlexNet, an eight-layer neural network that was eventually able to recognize objects from ImageNet with human-level accuracy. Hinton formed a company with Krizhevsky and Sutskever and sold it to Google. He and Jackie bought the island in Georgian Bay—"my one real indulgence," Hinton said.
Two years later, Jackie was diagnosed with pancreatic cancer. Doctors gave her a year or two to live. "She was incredibly brave and incredibly rational," Hinton said. "She wasn't in deep denial, desperately trying to get out of it. Her view was 'I can feel sorry for myself, or I can say I don't have much time left and I'd better do my best to enjoy it and make everything O.K. for other people.'" She and Hinton pored over the statistics before deciding on therapies; largely through chemo, she extended one or two years to three. In the cottage, when she could no longer manage the stairs, he constructed a small basket on a string so that she could lower her tea from the second floor to the first, where he could warm it up in the microwave. ("I should've just moved the microwave upstairs," he observed.)
Late in the day, we leaned on Hinton's standing desk as he showed me photos of Jackie on his laptop. In a picture of their wedding day, she and Hinton stand with his kids in the living room of their neighbor's house, exchanging vows. Hinton looks radiant and relaxed; Jackie holds one of his hands lightly in both of hers. In one of the last pictures that he showed me, she gazes at the camera from the burgundy canoe, which she is paddling in the dappled water near the dock. "That was the summer of 2017," Hinton said. Jackie died the following April. That June, Hinton, Bengio, and LeCun won the Turing Award—the equivalent of the Nobel Prize in computer science.
Hinton is convinced that there's a real sense in which neural nets are capable of having feelings. "I think feelings are counterfactual statements about what would have caused an action," he had told me, earlier that day. "Say that I feel like punching someone on the nose. What I mean is: if I didn't have social inhibitions—if I didn't stop myself from doing it—I would punch him on the nose. So when I say 'I feel angry,' it's a kind of abbreviation for saying, 'I feel like doing an aggressive act.' Feelings are just a way of talking about inclinations to action."
He told me that he had seen a "frustrated A.I." in 1973. A computer had been attached to two TV cameras and a simple robot arm; the system was tasked with assembling some blocks, spread out on a table, into the form of a toy car. "This was hard, particularly in 1973," he said. "The vision system could recognize the bits if they were all separate, but if you put them in a little pile it couldn't recognize them. So what did it do? It pulled back a little bit, and went bash!, and spread them over the table. Basically, it couldn't deal with what was going on, so it changed it, violently. And if a person did that you'd say they were frustrated. The computer couldn't see the blocks right, so he bashed them." To have a feeling was to want what you couldn't have.
"I love this house, but sometimes it's a sad place," he said, while we looked at the pictures. "Because she loved being here and isn't here."
The sun had almost set, and Hinton turned on a little light over his desk. He closed the computer and pushed his glasses up on his nose. He squared up his shoulders, returning to the present.
"I wanted you to know about Roz and Jackie because they're an important part of my life," he said. "But, actually, it's also quite relevant to artificial intelligence. There are two approaches to A.I. There's denial, and there's stoicism. Everybody's first reaction to A.I. is 'We've got to stop this.' Just like everybody's first reaction to cancer is 'How are we going to cut it out?'" But it was important to recognize when cutting it out was just a fantasy.
He sighed. "We can't be in denial," he said. "We have to be real. We need to think, How do we make it not as awful for humanity as it might be?"
How useful—or dangerous—will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI's GPT models are brainlike in that they involve billions of artificial neurons, they're actually profoundly different from biological brains. Today's A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive. They have probably passed the Turing test—the long-heralded standard, established by the computing pioneer Alan Turing, which held that any computer that could persuasively imitate a human in conversation could be said, reasonably, to think. And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.
During his last few years at Google, Hinton focussed his efforts on creating more traditionally mindlike artificial intelligence using hardware that more closely emulated the brain. In today's A.I.s, the weights of the connections among the artificial neurons are stored numerically; it's as though the brain keeps records about itself. In your actual, analog brain, however, the weights are built into the physical connections between neurons. Hinton worked to create an artificial version of this system using specialized computer chips.
"If you could do it, it would be amazing," he told me. The chips would be able to learn by varying their "conductances." Because the weights would be integrated into the hardware, it would be impossible to copy them from one machine to another; each artificial intelligence would have to learn on its own. "They would have to go to school," he said. "But you would go from using a megawatt to thirty watts." As he spoke, he leaned forward, his eyes boring into mine; I got a glimpse of Hinton the evangelist. Because the knowledge gained by each A.I. would be lost when it was disassembled, he called the approach "mortal computing." "We'd give up on immortality," he said. "In literature, you give up being a god for the woman you love, right? In this case, we'd get something far more important, which is energy efficiency." Among other things, energy efficiency encourages individuality: because a human brain can run on oatmeal, the world can support billions of brains, all different. And each brain can learn continuously, rather than being trained once, then pushed out into the world.
As a scientific enterprise, mortal A.I. might bring us closer to replicating our own brains. But Hinton has come to think, regretfully, that digital intelligence might be more powerful. In analog intelligence, "if the brain dies, the knowledge dies," he said. By contrast, in digital intelligence, "if a particular computer dies, those same connection strengths can be used on another computer. And, even if all the digital computers died, if you'd stored the connection strengths somewhere you could then just make another digital computer and run the same weights on that other digital computer. Ten thousand neural nets can learn ten thousand different things at the same time, then share what they've learned." This combination of immortality and replicability, he says, suggests that "we should be concerned about digital intelligence taking over from biological intelligence."
How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a "reasoning engine"—a way, perhaps, of sliding out from under the weight of the word "thinking," which we struggle to define. "People blame us for using those words—'thinking,' 'knowing,' 'understanding,' 'deciding,' and so on," Bengio told me. "But even though we don't have a complete understanding of the meaning of those words, they've been very powerful ways of creating analogies that help us understand what we're doing. It's helped us a lot to talk about 'imagination,' 'attention,' 'planning,' 'intuition' as a tool to clarify and explore." In Bengio's view, "a lot of what we've been doing is solving the 'intuition' aspect of the mind." Intuitions might be understood as thoughts that we can't explain: our minds generate them for us, unconsciously, by making connections between what we're encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. "For years, symbolic-A.I. people said our true nature is, we're reasoning machines," he told me. "I think that's just nonsense. Our true nature is, we're analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them."
On the whole, current A.I. technology is talky and cerebral: it stumbles at the borders of the physical. "Any teen-ager can learn to drive a car in twenty hours of practice, with hardly any supervision," LeCun told me. "Any cat can jump on a series of pieces of furniture and get to the top of some shelf. We don't have any A.I. systems coming anywhere close to doing these things today, except self-driving cars"—and they are over-engineered, requiring "mapping the whole city, hundreds of engineers, hundreds of thousands of hours of training." Solving the wriggly problems of physical intuition "will be the big challenge of the next decade," LeCun said. Still, the basic idea is simple: if neurons can do it, then so can neural nets.
Hinton suspects that skepticism of A.I.'s potential, while comforting, is often motivated by an unjustified faith in human exceptionalism. Researchers complain that A.I. chatbots "hallucinate," making up plausible answers to questions that stump them. But he contests that terminology. "We should say 'confabulate,'" he told me. "'Hallucination' is when you think there's sensory input—auditory hallucinations, visual hallucinations, olfactory hallucinations. But just making stuff up—that's confabulation." He cited the case of John Dean, President Richard Nixon's White House counsel, who was interviewed about Watergate before he knew that the conversations he described had been tape-recorded. Dean confabulated, getting the details wrong, mixing up who said what. "But the gist of it was all right," Hinton said. "He had a recollection of what went on, and he imposed that recollection on some characters in his head. He wrote a little play. And that's what human memory is like. In our minds, there's no boundary between just making it up and telling the truth. Telling the truth is just making it up correctly. Because it's all in the weights, right?" From this perspective, ChatGPT's ability to make things up is a flaw, but also a sign of its humanlike intelligence.
Hinton is often asked if he regrets his work. He doesn't. (He recently sent a journalist a one-liner—"a song for you"—along with a link to Edith Piaf's "Non, Je Ne Regrette Rien.") When he began his research, he says, no one thought that the technology would succeed; even when it started succeeding, no one thought that it would succeed so quickly. Precisely because he thinks that A.I. is truly intelligent, he expects that it will contribute to many fields. Yet he fears what will happen when, for instance, powerful people abuse it. "You can probably imagine Vladimir Putin creating an autonomous lethal weapon and giving it the goal of killing Ukrainians," Hinton said. He believes that autonomous weapons should be outlawed—the U.S. military is actively developing them—but warns that even a benign autonomous system could wreak havoc. "If you want a system to be effective, you need to give it the ability to create its own subgoals," he said. "Now, the problem is, there's a very general subgoal that helps with almost all goals: get more control. The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer." (Control, he noted, doesn't have to be physical: "It could be just like how Trump could invade the Capitol, with words.")
Within the field, Hinton's views are variously shared and disputed. "I'm not scared of A.I.," LeCun told me. "I think it will be relatively easy to design them so that their objectives will align with ours." He went on, "There's the idea that if a system is intelligent it's going to want to dominate. But the desire to dominate has nothing to do with intelligence—it has to do with testosterone." I recalled the spiders I'd seen at the cottage, and how their webs covered the surfaces of Hinton's windows. They didn't want to dominate, either—and yet their arachnid intelligence had led them to expand their territory. Living systems without centralized brains, such as ant colonies, don't "want" to do anything, yet they still find food, ford rivers, and kill competitors in vast numbers. Either Hinton or LeCun could be right. The metamorphosis isn't finished. We don't know what A.I. will become.
"Why don't we just unplug it?" I asked Hinton, of A.I. in general. "Is that a totally unreasonable question?"
"It's not unreasonable to say, We'd be better off without this—it's not worth it," he said. "Just as we might have been better off without fossil fuels. We'd have been far more primitive, but it may not have been worth the risk." He added, stoically, "But it's not going to happen. Because of the way society is. And because of the competition between different nations. If the U.N. really worked, possibly something like that could stop it. Although, even then, A.I. is just so useful. It has so much potential to do good, in fields like medicine—and, of course, to give an advantage to a nation via autonomous weapons." Earlier this year, Hinton declined to sign a popular petition that called for at least a six-month pause in research. "China's not going to stop developing it for six months," he said.
"So what should we do?" I asked.
"I don't know," he said. "It would be great if this were like climate change, where someone could say, Look, we either have to stop burning carbon or we have to find an effective way to remove carbon dioxide from the atmosphere. There, you know what the solution looks like. Here, it's not like that."
Hinton was pulling on a blue waterproof jacket. We were heading to the marina to pick up Rosemary. "She's brought supplies!" he said, smiling. As we walked out the door, I looked back into the cottage. In the big room, the burgundy canoe shone, caressed by sunlight. Chairs were arranged in front of it in a semicircle, facing the water through the windows. Some magazines were piled on a little table. It was a beautiful house. A human mind does more than reason; it exists in time, and reckons with life and death, and builds a world around itself. It gathers meaning, as if by gravity. An A.I., I thought, might be able to imagine a place like this. But would it ever need one?
We made our way down the wooded path, past the sheds and down the steps to the dock, then climbed into Hinton's boat. It was a perfect blue day, with a brisk wind roughing the water. Hinton stood at the wheel. I sat in front, watching other islands pass, thinking about the story of A.I. To some, it's a Copernican tale, in which our intuitions about the specialness of the human mind are being dislodged by thinking machines. To others, it's Promethean—having stolen fire, we risk getting burned. Some people think we're fooling ourselves, getting taken in by our own machines and the companies that hope to profit from them. In a strange way, it could also be a story about human limitation. If we were gods, we might make a different kind of A.I.; in reality, this version was what we could manage. Meanwhile, I couldn't help but consider the story in an Edenic light. By seeking to re-create the knowledge systems in our heads, we had seized the forbidden apple; we now risked exile from our charmed world. But who would choose not to know how knowing works?
At the marina, Hinton did a good job of working with the wind, accelerating forward, turning, and then allowing it to guide him into his slip. "I'm learning," he said, proud of himself. We walked ashore and waited by a shop for Rosemary to arrive. After a while, Hinton went inside to buy a light bulb. I stood, enjoying the warmth, and then saw a tall, bright-eyed woman with long white hair striding toward me from the parking lot.
Rosemary and I shook hands. Then she looked over my shoulder. Hinton was emerging from the greenery near the shop, grinning.
"What've you got for me?" she asked.
Hinton held up a black-and-yellow garter snake, perhaps a metre long, twisting round and round like a spring. "I've come bearing gifts!" he said, in a gallant tone. "I found it in the bushes."
Rosemary laughed, delighted, and turned to me. "This just epitomizes him," she said.
"He's not happy," Hinton said, observing the snake.
"Would you be?" Rosemary asked.
"I'm being very careful with his neck," Hinton said. "They're fragile."
He switched the snake from one hand to another, then held out a palm. It was covered in the snake's slimy musk.
"Have a sniff," he said.
We took turns. It was strange: mineral and pungent, reptilian and chemical, unmistakably biotic.
"You've got it all over your shirt!" Rosemary said.
"I had to catch him!" Hinton explained.
He put the snake down, and it slithered off into the grass. He watched it go with a satisfied look.
"Well," he said. "It's a beautiful day. Shall we brave the crossing?" ♦
An earlier version of this article mischaracterized Geoffrey Hinton's tree-felling process.