Monthly Archives: March 2017

New AI algorithm monitors sleep with radio waves

More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.

To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a new way to monitor sleep stages without sensors attached to the body. Their device uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).

“Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”

Katabi worked on the study with Matt Bianchi, chief of the Division of Sleep Medicine at MGH, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and a member of the Institute for Data, Systems, and Society at MIT. Mingmin Zhao, an MIT graduate student, is the paper’s first author, and Shichao Yue, another MIT graduate student, is also a co-author.

Remote sensing

Katabi and members of her group in MIT’s Computer Science and Artificial Intelligence Laboratory have previously developed radio-based sensors that enable them to remotely measure vital signs and behaviors that can be indicators of health. These sensors consist of a wireless device, about the size of a laptop computer, that emits low-power radio frequency (RF) signals. As the radio waves reflect off of the body, any slight movement of the body alters the frequency of the reflected waves. Analyzing those waves can reveal vital signs such as pulse and breathing rate.
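
The article stays at a high level, but the underlying signal-processing idea is straightforward to illustrate: once the reflected signal has been demodulated into a waveform that tracks chest motion, a strong periodic component in the breathing band reveals the breathing rate. The Python sketch below uses entirely synthetic numbers (the sample rate, breathing frequency, and noise level are assumptions) and is not the researchers' actual pipeline.

```python
# Synthetic illustration only, not the researchers' pipeline. Assumes the radio
# signal has already been demodulated into a waveform that tracks chest motion;
# the sample rate, breathing frequency, and noise level are invented values.
import numpy as np

fs = 50.0                             # samples per second (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)    # one minute of data
true_rate_hz = 0.25                   # about 15 breaths per minute (assumed)

# Simulated chest-motion waveform recovered from the RF reflection, plus noise.
motion = 0.5 * np.sin(2 * np.pi * true_rate_hz * t)
motion += 0.1 * np.random.randn(t.size)

# Estimate the breathing rate as the strongest frequency in a plausible
# breathing band (0.1 to 0.5 Hz, i.e. 6 to 30 breaths per minute).
spectrum = np.abs(np.fft.rfft(motion - motion.mean()))
freqs = np.fft.rfftfreq(motion.size, d=1.0 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)
estimate_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {estimate_hz * 60:.1f} breaths per minute")
```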

“It’s a smart Wi-Fi-like box that sits in the home and analyzes these reflections and discovers all of these changes in the body, through a signature that the body leaves on the RF signal,” Katabi says.

Katabi and her students have also used this approach to create a sensor called WiGait that can measure walking speed using wireless signals, which could help doctors predict cognitive decline, falls, certain cardiac or pulmonary diseases, or other health problems.

After developing those sensors, Katabi thought that a similar approach could also be useful for monitoring sleep, which is currently done while patients spend the night in a sleep lab hooked up to monitors such as electroencephalography (EEG) machines.

“The opportunity is very big because we don’t understand sleep well, and a high fraction of the population has sleep problems,” says Zhao. “We have this technology that, if we can make it work, can move us from a world where we do sleep studies once every few months in the sleep lab to continuous sleep studies in the home.”

To achieve that, the researchers had to come up with a way to translate their measurements of pulse, breathing rate, and movement into sleep stages. Recent advances in artificial intelligence have made it possible to train computer algorithms known as deep neural networks to extract and analyze information from complex datasets, such as the radio signals obtained from the researchers’ sensor. However, these signals have a great deal of information that is irrelevant to sleep and can be confusing to existing algorithms. The MIT researchers had to come up with a new AI algorithm based on deep neural networks, which eliminates the irrelevant information.
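
The article does not spell out the architecture, but the basic shape of such a model can be sketched: a network that takes a window of RF-derived measurements and scores the sleep-stage classes. Everything below, from the feature dimensions to the layer sizes, is an assumption made for illustration, not the MIT group's design; in particular, the part the article emphasizes (learning to discard environment- and person-specific information) is omitted here for brevity.

```python
# Minimal sketch of the kind of model described above: map a window of
# RF-derived measurements to a sleep stage. Architecture, feature count,
# and window length are illustrative assumptions only.
import torch
import torch.nn as nn

N_FEATURES = 8       # assumed per-time-step features from the radio signal
WINDOW_LEN = 900     # assumed 30-second window at 30 samples per second
N_STAGES = 3         # light, deep, REM (the stages named in the article)

class SleepStager(nn.Module):
    def __init__(self):
        super().__init__()
        # 1-D convolutions summarize local temporal structure in the window...
        self.encoder = nn.Sequential(
            nn.Conv1d(N_FEATURES, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # ...and a linear head scores the sleep stages.
        self.head = nn.Linear(64, N_STAGES)

    def forward(self, x):                 # x: (batch, N_FEATURES, WINDOW_LEN)
        z = self.encoder(x).squeeze(-1)   # (batch, 64) summary of the window
        return self.head(z)               # (batch, N_STAGES) stage scores

model = SleepStager()
scores = model(torch.randn(2, N_FEATURES, WINDOW_LEN))
predicted_stage = scores.argmax(dim=1)    # index into {light, deep, REM}
```

One common way to make such a model ignore nuisance variation, as the researchers describe doing, is to add an adversarial objective that penalizes the encoder whenever a second network can recover the irrelevant factor from its output; that refinement is left out of the sketch above.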

“The surrounding conditions introduce a lot of unwanted variation in what you measure. The novelty lies in preserving the sleep signal while removing the rest,” says Jaakkola. Their algorithm can be used in different locations and with different people, without any calibration.

Using this approach in tests of 25 healthy volunteers, the researchers found that their technique was about 80 percent accurate, which is comparable to the accuracy of ratings determined by sleep specialists based on EEG measurements.

“Our device allows you not only to remove all of these sensors that you put on the person, and make it a much better experience that can be done at home, it also makes the job of the doctor and the sleep technologist much easier,” Katabi says. “They don’t have to go through the data and manually label it.”

Sleep deficiencies

Other researchers have tried to use radio signals to monitor sleep, but these systems are accurate only 65 percent of the time and mainly determine whether a person is awake or asleep, not what sleep stage they are in. Katabi and her colleagues were able to improve on that by training their algorithm to ignore wireless signals that bounce off of other objects in the room and include only data reflected from the sleeping person.

The researchers now plan to use this technology to study how Parkinson’s disease affects sleep.

“When you think about Parkinson’s, you think about it as a movement disorder, but the disease is also associated with very complex sleep deficiencies, which are not very well understood,” Katabi says.

The sensor could also be used to learn more about sleep changes produced by Alzheimer’s disease, as well as sleep disorders such as insomnia and sleep apnea. It may also be useful for studying epileptic seizures that happen during sleep, which are usually difficult to detect.

40 years of the internet: how the world changed forever

Towards the end of the summer of 1969 – a few weeks after the moon landings, a few days after Woodstock, and a month before the first broadcast of Monty Python’s Flying Circus – a large grey metal box was delivered to the office of Leonard Kleinrock, a professor at the University of California in Los Angeles. It was the same size and shape as a household refrigerator, and outwardly, at least, it had about as much charm. But Kleinrock was thrilled: a photograph from the time shows him standing beside it, in requisite late-60s brown tie and brown trousers, beaming like a proud father.

Had he tried to explain his excitement to anyone but his closest colleagues, they probably wouldn’t have understood. The few outsiders who knew of the box’s existence couldn’t even get its name right: it was an IMP, or “interface message processor”, but the year before, when a Boston company had won the contract to build it, its local senator, Ted Kennedy, sent a telegram praising its ecumenical spirit in creating the first “interfaith message processor”. Needless to say, though, the box that arrived outside Kleinrock’s office wasn’t a machine capable of fostering understanding among the great religions of the world. It was much more important than that.

It’s impossible to say for certain when the internet began, mainly because nobody can agree on what, precisely, the internet is. (This is only partly a philosophical question: it is also a matter of egos, since several of the people who made key contributions are anxious to claim the credit.) But 29 October 1969 – 40 years ago next week – has a strong claim for being, as Kleinrock puts it today, “the day the infant internet uttered its first words”. At 10.30pm, as Kleinrock’s fellow professors and students crowded around, a computer was connected to the IMP, which made contact with a second IMP, attached to a second computer, several hundred miles away at the Stanford Research Institute, and an undergraduate named Charley Kline tapped out a message. Samuel Morse, sending the first telegraph message 125 years previously, chose the portentous phrase: “What hath God wrought?” But Kline’s task was to log in remotely from LA to the Stanford machine, and there was no opportunity for portentousness: his instructions were to type the command LOGIN.

To say that the rest is history is the emptiest of cliches – but trying to express the magnitude of what began that day, and what has happened in the decades since, is an undertaking that quickly exposes the limits of language. It’s interesting to compare how much has changed in computing and the internet since 1969 with, say, how much has changed in world politics. Consider even the briefest summary of how much has happened on the global stage since 1969: the Vietnam war ended; the cold war escalated then declined; the Berlin Wall fell; communism collapsed; Islamic fundamentalism surged. And yet nothing has quite the power to make people in their 30s, 40s or 50s feel very old indeed as reflecting upon the growth of the internet and the world wide web. Twelve years after Charley Kline’s first message on the Arpanet, as it was then known, there were still only 213 computers on the network; but 14 years after that, 16 million people were online, and email was beginning to change the world; the first really usable web browser wasn’t launched until 1993, but by 1995 we had Amazon, by 1998 Google, and by 2001, Wikipedia, at which point there were 513 million people online. Today the figure is more like 1.7 billion.

Unless you are 15 years old or younger, you have lived through the dotcom bubble and bust, the birth of Friends Reunited and Craigslist and eBay and Facebook and Twitter, blogging, the browser wars, Google Earth, filesharing controversies, the transformation of the record industry, political campaigning and activism, the media, publishing, consumer banking, the pornography industry, travel agencies, dating and retail; and unless you’re a specialist, you’ve probably only been following the most attention-grabbing developments. Here’s one of countless statistics that are liable to induce feelings akin to vertigo: on New Year’s Day 1994 – only yesterday, in other words – there were an estimated 623 websites. In total. On the whole internet. “This isn’t a matter of ego or crowing,” says Steve Crocker, who was present that day at UCLA in 1969, “but there has not been, in the entire history of mankind, anything that has changed so dramatically as computer communications, in terms of the rate of change.”

Looking back now, Kleinrock and Crocker are both struck by how, as young computer scientists, they were simultaneously aware that they were involved in something momentous and, at the same time, merely addressing a fairly mundane technical problem. On the one hand, they were there because of the Russian Sputnik satellite launch, in 1957, which panicked the American defence establishment, prompting Eisenhower to channel millions of dollars into scientific research, and establishing Arpa, the Advanced Research Projects Agency, to try to win the arms technology race. The idea was “that we would not get surprised again,” said Robert Taylor, the Arpa scientist who secured the money for the Arpanet, persuading the agency’s head to give him a million dollars that had been earmarked for ballistic missile research. With another pioneer of the early internet, JCR Licklider, Taylor co-wrote the paper, “The Computer As A Communication Device”, which hinted at what was to come. “In a few years, men will be able to communicate more effectively through a machine than face to face,” they declared. “That is rather a startling thing to say, but it is our conclusion.”

On the other hand, the breakthrough accomplished that night in 1969 was a decidedly down-to-earth one. The Arpanet was not, in itself, intended as some kind of secret weapon to put the Soviets in their place: it was simply a way to enable researchers to access computers remotely, because computers were still vast and expensive, and the scientists needed a way to share resources. (The notion that the network was designed so that it would survive a nuclear attack is an urban myth, though some of those involved sometimes used that argument to obtain funding.) The technical problem solved by the IMPs wasn’t very exciting, either. It was already possible to link computers by telephone lines, but it was glacially slow, and every computer in the network had to be connected, by a dedicated line, to every other computer, which meant you couldn’t connect more than a handful of machines without everything becoming monstrously complex and costly. The solution, called “packet switching” – which owed its existence to the work of a British physicist, Donald Davies – involved breaking data down into blocks that could be routed around any part of the network that happened to be free, before getting reassembled at the other end.
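
Stripped of addressing, routing, and error handling, the core of that idea is simple enough to show in a few lines: chop the data into numbered blocks, let them travel independently (and possibly arrive out of order), then reassemble them by sequence number at the far end. A toy Python sketch of just that reordering step:

```python
# Toy illustration of packet switching: number the blocks, let them arrive in
# any order, and reassemble by sequence number at the destination. Real packet
# switching also handles addressing, routing, loss, and retransmission.
import random

def to_packets(message: str, size: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Put the payloads back in order using their sequence numbers."""
    return "".join(payload for _, payload in sorted(packets))

original = "a message broken into independently routed blocks"
packets = to_packets(original)
random.shuffle(packets)            # simulate packets taking different routes
assert reassemble(packets) == original
```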

“I thought this was important, but I didn’t really think it was as challenging as what I thought of as the ‘real research’,” says Crocker, a genial Californian, now 65, who went on to play a key role in the expansion of the internet. “I was particularly fascinated, in those days, by artificial intelligence, and by trying to understand how people think. I thought that was a much more substantial and respectable research topic than merely connecting up a few machines. That was certainly useful, but it wasn’t art.”

Still, Kleinrock recalls a tangible sense of excitement that night as Kline sat down at the SDS Sigma 7 computer, connected to the IMP, and at the same time made telephone contact with his opposite number at Stanford. As his colleagues watched, he typed the letter L, to begin the word LOGIN.

“Have you got the L?” he asked, down the phone line. “Got the L,” the voice at Stanford responded.

Kline typed an O. “Have you got the O?”

“Got the O,” Stanford replied.

Kline typed a G, at which point the system crashed, and the connection was lost. The G didn’t make it through, which meant that, quite by accident, the first message ever transmitted across the nascent internet turned out, after all, to be fittingly biblical:

“LO.”

Frenzied visions of a global conscious brain

One of the most intriguing things about the growth of the internet is this: to a select group of technological thinkers, the surprise wasn’t how quickly it spread across the world, remaking business, culture and politics – but that it took so long to get off the ground. Even when computers were mainly run on punch-cards and paper tape, there were whispers that it was inevitable that they would one day work collectively, in a network, rather than individually. (Tracing the origins of online culture even further back is some people’s idea of an entertaining game: there are those who will tell you that the Talmud, the book of Jewish law, contains a form of hypertext, the linking-and-clicking structure at the heart of the web.) In 1945, the American presidential science adviser, Vannevar Bush, was already imagining the “memex”, a device in which “an individual stores all his books, records, and communications”, which would be linked to each other by “a mesh of associative trails”, like weblinks. Others had frenzied visions of the world’s machines turning into a kind of conscious brain. And in 1946, an astonishingly complete vision of the future appeared in the magazine Astounding Science Fiction. In a story entitled A Logic Named Joe, the author Murray Leinster envisioned a world in which every home was equipped with a tabletop box that he called a “logic”:

“You got a logic in your house. It looks like a vision receiver used to, only it’s got keys instead of dials and you punch the keys for what you wanna get . . . you punch ‘Sally Hancock’s Phone’ an’ the screen blinks an’ sputters an’ you’re hooked up with the logic in her house an’ if somebody answers you got a vision-phone connection. But besides that, if you punch for the weather forecast [or] who was mistress of the White House durin’ Garfield’s administration . . . that comes on the screen too. The relays in the tank do it. The tank is a big buildin’ full of all the facts in creation . . . hooked in with all the other tanks all over the country . . . The only thing it won’t do is tell you exactly what your wife meant when she said, ‘Oh, you think so, do you?’ in that peculiar kinda voice.”

Despite all these predictions, though, the arrival of the internet in the shape we know it today was never a matter of inevitability. It was a crucial idiosyncrasy of the Arpanet that its funding came from the American defence establishment – but that the millions ended up on university campuses, with researchers who embraced an anti-establishment ethic, and who in many cases were committedly leftwing; one computer scientist took great pleasure in wearing an anti-Vietnam badge to a briefing at the Pentagon. Instead of smothering their research in the utmost secrecy – as you might expect of a cold war project aimed at winning a technological battle against Moscow – they made public every step of their thinking, in documents known as Requests For Comments.

Deliberately or not, they helped encourage a vibrant culture of hobbyists on the fringes of academia – students and rank amateurs who built their own electronic bulletin-board systems and eventually FidoNet, a network to connect them to each other. An argument can be made that these unofficial tinkerings did as much to create the public internet as did the Arpanet. Well into the 90s, by the time the Arpanet had been replaced by NSFNet, a larger government-funded network, it was still the official position that only academic researchers, and those affiliated to them, were supposed to use the network. It was the hobbyists, making unofficial connections into the main system, who first opened the internet up to all-comers.

What made all of this possible, on a technical level, was simultaneously the dullest-sounding and most crucial development since Kleinrock’s first message. This was the software known as TCP/IP, which made it possible for networks to connect to other networks, creating a “network of networks”, capable of expanding virtually infinitely – which is another way of defining what the internet is. It’s for this reason that the inventors of TCP/IP, Vint Cerf and Bob Kahn, are contenders for the title of fathers of the internet, although Kleinrock, understandably, disagrees. “Let me use an analogy,” he says. “You would certainly not credit the birth of aviation to the invention of the jet engine. The Wright Brothers launched aviation. Jet engines greatly improved things.”
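
TCP/IP is still the substrate under nearly everything described in this article. As a small illustration using Python's standard socket module (with example.com standing in as a placeholder host, and a live network connection assumed), opening a TCP connection and sending a minimal HTTP request takes only a few lines:

```python
# A few lines of standard-library Python are enough to see TCP/IP at work:
# open a TCP connection to a web server and send a minimal HTTP request.
# example.com is just a placeholder host; this needs a live network connection.
import socket

request = b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
with socket.create_connection(("example.com", 80), timeout=10) as conn:
    conn.sendall(request)           # bytes travel as TCP segments in IP packets
    reply = conn.recv(4096)

print(reply.split(b"\r\n")[0].decode())   # e.g. "HTTP/1.1 200 OK"
```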

The spread of the internet across the Atlantic, through academia and eventually to the public, is a tale too intricate to recount here, though it bears mentioning that British Telecom and the British government didn’t really want the internet at all: along with other European governments, they were in favour of a different networking technology, Open Systems Interconnect. Nevertheless, by July 1992, an Essex-born businessman named Cliff Stanford had opened Demon Internet, Britain’s first commercial internet service provider. Officially, the public still wasn’t meant to be connecting to the internet. “But it was never a real problem,” Stanford says today. “The people trying to enforce that weren’t working very hard to make it happen, and the people working to do the opposite were working much harder.” The French consulate in London was an early customer, paying Demon £10 a month instead of thousands of pounds to lease a private line to Paris from BT.

After a year or so, Demon had between 2,000 and 3,000 users, but they weren’t always clear why they had signed up: it was as if they had sensed the direction of the future, in some inchoate fashion, but hadn’t thought things through any further than that. “The question we always got was: ‘OK, I’m connected – what do I do now?'” Stanford recalls. “It was one of the most common questions on our support line. We would answer with ‘Well, what do you want to do? Do you want to send an email?’ ‘Well, I don’t know anyone with an email address.’ People got connected, but they didn’t know what was meant to happen next.”

Fortunately, a couple of years previously, a British scientist based at Cern, the physics laboratory outside Geneva, had begun to answer that question, and by 1993 his answer was beginning to be known to the general public. What happened next was the web.

The birth of the web

I sent my first email in 1994, not long after arriving at university, from a small, under-ventilated computer room that smelt strongly of sweat. Email had been in existence for decades by then – the @ symbol was introduced in 1971, and the first message, according to the programmer who sent it, Ray Tomlinson, was “something like QWERTYUIOP”. (The test messages, Tomlinson has said, “were entirely forgettable, and I have, therefore, forgotten them”.) But according to an unscientific poll of friends, family and colleagues, 1994 seems fairly typical: I was neither an early adopter nor a late one. A couple of years later I got my first mobile phone, which came with two batteries: a very large one, for normal use, and an extremely large one, for those occasions on which you might actually want a few hours of power. By the time I arrived at the Guardian, email was in use, but only as an add-on to the internal messaging system, operated via chunky beige terminals with green-on-black screens. It took for ever to find the @ symbol on the keyboard, and I don’t remember anything like an inbox, a sent-mail folder, or attachments. I am 34 years old, but sometimes I feel like Methuselah.

I have no recollection of when I first used the world wide web, though it was almost certainly when people still called it the world wide web, or even W3, perhaps in the same breath as the phrase “information superhighway”, made popular by Al Gore. (Or “infobahn”: did any of us really, ever, call the internet the “infobahn”?) For most of us, though, the web is in effect synonymous with the internet, even if we grasp that in technical terms that’s inaccurate: the web is simply a system that sits on top of the internet, making it greatly easier to navigate the information there, and to use it as a medium of sharing and communication. But the distinction rarely seems relevant in everyday life now, which is why its inventor, Tim Berners-Lee, has his own legitimate claim to be the progenitor of the internet as we know it. The first ever website was his own, at CERN: info.cern.ch.

The idea that a network of computers might enable a specific new way of thinking about information, instead of just allowing people to access the data on each other’s terminals, had been around for as long as the idea of the network itself: it’s there in Vannevar Bush’s memex, and Murray Leinster’s logics. But the grandest expression of it was Project Xanadu, launched in 1960 by the American philosopher Ted Nelson, who imagined – and started to build – a vast repository for every piece of writing in existence, with everything connected to everything else according to a principle he called “transclusion”. It was also, presciently, intended as a method for handling many of the problems that would come to plague the media in the age of the internet, automatically channelling small royalties back to the authors of anything that was linked. Xanadu was a mind-spinning vision – and at least according to an unflattering portrayal by Wired magazine in 1995, over which Nelson threatened to sue, led those attempting to create it into a rabbit-hole of confusion, backbiting and “heart-slashing despair”. Nelson continues to develop Xanadu today, arguing that it is a vastly superior alternative to the web. “WE FIGHT ON,” the Xanadu website declares, sounding rather beleaguered, not least since the declaration is made on a website.

Web browsers crossed the border into mainstream use far more rapidly than had been the case with the internet itself: Mosaic launched in 1993 and Netscape followed soon after, though it was an embarrassingly long time before Microsoft realised the commercial necessity of getting involved at all. Amazon and eBay were online by 1995. And in 1998 came Google, offering a powerful new way to search the proliferating mass of information on the web. Until not too long before Google, it had been common for search or directory websites to boast about how much of the web’s information they had indexed – the relic of a brief period, hilarious in hindsight, when a user might genuinely have hoped to check all the webpages that mentioned a given subject. Google, and others, saw that the key to the web’s future would be helping users exclude almost everything on any given topic, restricting search results to the most relevant pages.

Without most of us quite noticing when it happened, the web went from being a strange new curiosity to a background condition of everyday life: I have no memory of there being an intermediate stage, when, say, half the information I needed on a particular topic could be found online, while the other half still required visits to libraries. “I remember the first time I saw a web address on the side of a truck, and I thought, huh, OK, something’s happening here,” says Spike Ilacqua, who years beforehand had helped found The World, the first commercial internet service provider in the US. Finally, he stopped telling acquaintances that he worked in “computers”, and started to say that he worked on “the internet”, and nobody thought that was strange.

It is absurd – though also unavoidable here – to compact the whole of what happened from then onwards into a few sentences: the dotcom boom, the historically unprecedented dotcom bust, the growing “digital divide”, and then the hugely significant flourishing, over the last seven years, of what became known as Web 2.0. It is only this latter period that has revealed the true capacity of the web for “generativity”, for the publishing of blogs by anyone who could type, for podcasting and video-sharing, for the undermining of totalitarian regimes, for the use of sites such as Twitter and Facebook to create (and ruin) friendships, spread fashions and rumours, or organise political resistance. But you almost certainly know all this: it’s part of what these days, in many parts of the world, we call “just being alive”.

The most confounding thing of all is that in a few years’ time, all this stupendous change will probably seem like not very much change at all. As Crocker points out, when you’re dealing with exponential growth, the distance from A to B looks huge until you get to point C, whereupon the distance between A and B looks like almost nothing; when you get to point D, the distance between B and C looks similarly tiny. One day, presumably, everything that has happened in the last 40 years will look like early throat-clearings — mere preparations for whatever the internet is destined to become. We will be the equivalents of the late-60s computer engineers, in their horn-rimmed glasses, brown suits, and brown ties, strange, period-costume characters populating some dimly remembered past.

Creation Myth by Malcolm Gladwell

Xerox PARC, Apple, and the truth about innovation.

1.

In late 1979, a twenty-four-year-old entrepreneur paid a visit to a research center in Silicon Valley called Xerox PARC. He was the co-founder of a small computer startup down the road, in Cupertino. His name was Steve Jobs.

Xerox PARC was the innovation arm of the Xerox Corporation. It was, and remains, on Coyote Hill Road, in Palo Alto, nestled in the foothills on the edge of town, in a long, low concrete building, with enormous terraces looking out over the jewels of Silicon Valley. To the northwest was Stanford University’s Hoover Tower. To the north was Hewlett-Packard’s sprawling campus. All around were scores of the other chip designers, software firms, venture capitalists, and hardware-makers. A visitor to PARC, taking in that view, could easily imagine that it was the computer world’s castle, lording over the valley below—and, at the time, this wasn’t far from the truth. In 1970, Xerox had assembled the world’s greatest computer engineers and programmers, and for the next ten years they had an unparalleled run of innovation and invention. If you were obsessed with the future in the seventies, you were obsessed with Xerox PARC—which was why the young Steve Jobs had driven to Coyote Hill Road.

Apple was already one of the hottest tech firms in the country. Everyone in the Valley wanted a piece of it. So Jobs proposed a deal: he would allow Xerox to buy a hundred thousand shares of his company for a million dollars—its highly anticipated I.P.O. was just a year away—if PARC would “open its kimono.” A lot of haggling ensued. Jobs was the fox, after all, and PARC was the henhouse. What would he be allowed to see? What wouldn’t he be allowed to see? Some at PARC thought that the whole idea was lunacy, but, in the end, Xerox went ahead with it. One PARC scientist recalls Jobs as “rambunctious”—a fresh-cheeked, caffeinated version of today’s austere digital emperor. He was given a couple of tours, and he ended up standing in front of a Xerox Alto, PARC’s prized personal computer.

An engineer named Larry Tesler conducted the demonstration. He moved the cursor across the screen with the aid of a “mouse.” Directing a conventional computer, in those days, meant typing in a command on the keyboard. Tesler just clicked on one of the icons on the screen. He opened and closed “windows,” deftly moving from one task to another. He wrote on an elegant word-processing program, and exchanged e-mails with other people at PARC, on the world’s first Ethernet network. Jobs had come with one of his software engineers, Bill Atkinson, and Atkinson moved in as close as he could, his nose almost touching the screen. “Jobs was pacing around the room, acting up the whole time,” Tesler recalled. “He was very excited. Then, when he began seeing the things I could do onscreen, he watched for about a minute and started jumping around the room, shouting, ‘Why aren’t you doing anything with this? This is the greatest thing. This is revolutionary!’”

Xerox began selling a successor to the Alto in 1981. It was slow and underpowered—and Xerox ultimately withdrew from personal computers altogether. Jobs, meanwhile, raced back to Apple, and demanded that the team working on the company’s next generation of personal computers change course. He wanted menus on the screen. He wanted windows. He wanted a mouse. The result was the Macintosh, perhaps the most famous product in the history of Silicon Valley.

“If Xerox had known what it had and had taken advantage of its real opportunities,” Jobs said, years later, “it could have been as big as I.B.M. plus Microsoft plus Xerox combined—and the largest high-technology company in the world.”

This is the legend of Xerox PARC. Jobs is the Biblical Jacob and Xerox is Esau, squandering his birthright for a pittance. In the past thirty years, the legend has been vindicated by history. Xerox, once the darling of the American high-technology community, slipped from its former dominance. Apple is now ascendant, and the demonstration in that room in Palo Alto has come to symbolize the vision and ruthlessness that separate true innovators from also-rans. As with all legends, however, the truth is a bit more complicated.

2.

After Jobs returned from PARC, he met with a man named Dean Hovey, who was one of the founders of the industrial-design firm that would become known as IDEO. “Jobs went to Xerox PARC on a Wednesday or a Thursday, and I saw him on the Friday afternoon,” Hovey recalled. “I had a series of ideas that I wanted to bounce off him, and I barely got two words out of my mouth when he said, ‘No, no, no, you’ve got to do a mouse.’ I was, like, ‘What’s a mouse?’ I didn’t have a clue. So he explains it, and he says, ‘You know, [the Xerox mouse] is a mouse that cost three hundred dollars to build and it breaks within two weeks. Here’s your design spec: Our mouse needs to be manufacturable for less than fifteen bucks. It needs to not fail for a couple of years, and I want to be able to use it on Formica and my bluejeans.’ From that meeting, I went to Walgreens, which is still there, at the corner of Grant and El Camino in Mountain View, and I wandered around and bought all the underarm deodorants that I could find, because they had that ball in them. I bought a butter dish. That was the beginnings of the mouse.”

I spoke with Hovey in a ramshackle building in downtown Palo Alto, where his firm had started out. He had asked the current tenant if he could borrow his old office for the morning, just for the fun of telling the story of the Apple mouse in the place where it was invented. The room was the size of someone’s bedroom. It looked as if it had last been painted in the Coolidge Administration. Hovey, who is lean and healthy in a Northern California yoga-and-yogurt sort of way, sat uncomfortably at a rickety desk in a corner of the room. “Our first machine shop was literally out on the roof,” he said, pointing out the window to a little narrow strip of rooftop, covered in green outdoor carpeting. “We didn’t tell the planning commission. We went and got that clear corrugated stuff and put it across the top for a roof. We got out through the window.”

He had brought a big plastic bag full of the artifacts of that moment: diagrams scribbled on lined paper, dozens of differently sized plastic mouse shells, a spool of guitar wire, a tiny set of wheels from a toy train set, and the metal lid from a jar of Ralph’s preserves. He turned the lid over. It was filled with a waxlike substance, the middle of which had a round indentation, in the shape of a small ball. “It’s epoxy casting resin,” he said. “You pour it, and then I put Vaseline on a smooth steel ball, and set it in the resin, and it hardens around it.” He tucked the steel ball underneath the lid and rolled it around the tabletop. “It’s a kind of mouse.”

The hard part was that the roller ball needed to be connected to the housing of the mouse, so that it didn’t fall out, and so that it could transmit information about its movements to the cursor on the screen. But if the friction created by those connections was greater than the friction between the tabletop and the roller ball, the mouse would skip. And the more the mouse was used the more dust it would pick up off the tabletop, and the more it would skip. The Xerox PARC mouse was an elaborate affair, with an array of ball bearings supporting the roller ball. But there was too much friction on the top of the ball, and it couldn’t deal with dust and grime.

At first, Hovey set to work with various arrangements of ball bearings, but nothing quite worked. “This was the ‘aha’ moment,” Hovey said, placing his fingers loosely around the sides of the ball, so that they barely touched its surface. “So the ball’s sitting here. And it rolls. I attribute that not to the table but to the oldness of the building. The floor’s not level. So I started playing with it, and that’s when I realized: I want it to roll. I don’t want it to be supported by all kinds of ball bearings. I want to just barely touch it.”

The trick was to connect the ball to the rest of the mouse at the two points where there was the least friction—right where his fingertips had been, dead center on either side of the ball. “If it’s right at midpoint, there’s no force causing it to rotate. So it rolls.”

Hovey estimated their consulting fee at thirty-five dollars an hour; the whole project cost perhaps a hundred thousand dollars. “I originally pitched Apple on doing this mostly for royalties, as opposed to a consulting job,” he recalled. “I said, ‘I’m thinking fifty cents apiece,’ because I was thinking that they’d sell fifty thousand, maybe a hundred thousand of them.” He burst out laughing, because of how far off his estimates ended up being. “Steve’s pretty savvy. He said no. Maybe if I’d asked for a nickel, I would have been fine.”

3.

Here is the first complicating fact about the Jobs visit. In the legend of Xerox PARC, Jobs stole the personal computer from Xerox. But the striking thing about Jobs’s instructions to Hovey is that he didn’t want to reproduce what he saw at PARC. “You know, there were disputes around the number of buttons—three buttons, two buttons, one-button mouse,” Hovey went on. “The mouse at Xerox had three buttons. But we came around to the fact that learning to mouse is a feat in and of itself, and to make it as simple as possible, with just one button, was pretty important.”

So was what Jobs took from Xerox the idea of the mouse? Not quite, because Xerox never owned the idea of the mouse. The PARC researchers got it from the computer scientist Douglas Engelbart, at Stanford Research Institute, fifteen minutes away on the other side of the university campus. Engelbart dreamed up the idea of moving the cursor around the screen with a stand-alone mechanical “animal” back in the mid-nineteen-sixties. His mouse was a bulky, rectangular affair, with what looked like steel roller-skate wheels. If you lined up Engelbart’s mouse, Xerox’s mouse, and Apple’s mouse, you would not see the serial reproduction of an object. You would see the evolution of a concept.

The same is true of the graphical user interface that so captured Jobs’s imagination. Xerox PARC’s innovation had been to replace the traditional computer command line with onscreen icons. But when you clicked on an icon you got a pop-up menu: this was the intermediary between the user’s intention and the computer’s response. Jobs’s software team took the graphical interface a giant step further. It emphasized “direct manipulation.” If you wanted to make a window bigger, you just pulled on its corner and made it bigger; if you wanted to move a window across the screen, you just grabbed it and moved it. The Apple designers also invented the menu bar, the pull-down menu, and the trash can—all features that radically simplified the original Xerox PARC idea.

The difference between direct and indirect manipulation—between three buttons and one button, three hundred dollars and fifteen dollars, and a roller ball supported by ball bearings and a free-rolling ball—is not trivial. It is the difference between something intended for experts, which is what Xerox PARC had in mind, and something that’s appropriate for a mass audience, which is what Apple had in mind. PARC was building a personal computer. Apple wanted to build a popular computer.

In a recent study, “The Culture of Military Innovation,” the military scholar Dima Adamsky makes a similar argument about the so-called Revolution in Military Affairs. R.M.A. refers to the way armies have transformed themselves with the tools of the digital age—such as precision-guided missiles, surveillance drones, and real-time command, control, and communications technologies—and Adamsky begins with the simple observation that it is impossible to determine who invented R.M.A. The first people to imagine how digital technology would transform warfare were a cadre of senior military intellectuals in the Soviet Union, during the nineteen-seventies. The first country to come up with these high-tech systems was the United States. And the first country to use them was Israel, in its 1982 clash with the Syrian Air Force in Lebanon’s Bekaa Valley, a battle commonly referred to as “the Bekaa Valley turkey shoot.” Israel coördinated all the major innovations of R.M.A. in a manner so devastating that it destroyed nineteen surface-to-air batteries and eighty-seven Syrian aircraft while losing only a handful of its own planes.

That’s three revolutions, not one, and Adamsky’s point is that each of these strands is necessarily distinct, drawing on separate skills and circumstances. The Soviets had a strong, centralized military bureaucracy, with a long tradition of theoretical analysis. It made sense that they were the first to understand the military implications of new information systems. But they didn’t do anything with it, because centralized military bureaucracies with strong intellectual traditions aren’t very good at connecting word and deed.

The United States, by contrast, has a decentralized, bottom-up entrepreneurial culture, which has historically had a strong orientation toward technological solutions. The military’s close ties to the country’s high-tech community made it unsurprising that the U.S. would be the first to invent precision-guidance and next-generation command-and-control communications. But those assets also meant that Soviet-style systemic analysis wasn’t going to be a priority. As for the Israelis, their military culture grew out of a background of resource constraint and constant threat. In response, they became brilliantly improvisational and creative. But, as Adamsky points out, a military built around urgent, short-term “fire extinguishing” is not going to be distinguished by reflective theory. No one stole the revolution. Each party viewed the problem from a different perspective, and carved off a different piece of the puzzle.

In the history of the mouse, Engelbart was the Soviet Union. He was the visionary, who saw the mouse before anyone else did. But visionaries are limited by their visions. “Engelbart’s self-defined mission was not to produce a product, or even a prototype; it was an open-ended search for knowledge,” Michael Hiltzik writes, in “Dealers of Lightning” (1999), his wonderful history of Xerox PARC. “Consequently, no project in his lab ever seemed to come to an end.” Xerox PARC was the United States: it was a place where things got made. “Xerox created this perfect environment,” recalled Bob Metcalfe, who worked there through much of the nineteen-seventies, before leaving to found the networking company 3Com. “There wasn’t any hierarchy. We built out our own tools. When we needed to publish papers, we built a printer. When we needed to edit the papers, we built a computer. When we needed to connect computers, we figured out how to connect them. We had big budgets. Unlike many of our brethren, we didn’t have to teach. We could just research. It was heaven.”

But heaven is not a good place to commercialize a product. “We built a computer and it was a beautiful thing,” Metcalfe went on. “We developed our computer language, our own display, our own language. It was a gold-plated product. But it cost sixteen thousand dollars, and it needed to cost three thousand dollars.” For an actual product, you need threat and constraint—and the improvisation and creativity necessary to turn a gold-plated three-hundred-dollar mouse into something that works on Formica and costs fifteen dollars. Apple was Israel.

Xerox couldn’t have been I.B.M. and Microsoft combined, in other words. “You can be one of the most successful makers of enterprise technology products the world has ever known, but that doesn’t mean your instincts will carry over to the consumer market,” the tech writer Harry McCracken recently wrote. “They’re really different, and few companies have ever been successful in both.” He was talking about the decision by the networking giant Cisco Systems, this spring, to shut down its Flip camera business, at a cost of many hundreds of millions of dollars. But he could just as easily have been talking about the Xerox of forty years ago, which was one of the most successful makers of enterprise technology the world has ever known. The fair question is whether Xerox, through its research arm in Palo Alto, found a better way to be Xerox—and the answer is that it did, although that story doesn’t get told nearly as often.

4.

One of the people at Xerox PARC when Steve Jobs visited was an optical engineer named Gary Starkweather. He is a solid and irrepressibly cheerful man, with large, practical hands and the engineer’s gift of pretending that what is impossibly difficult is actually pretty easy, once you shave off a bit here, and remember some of your high-school calculus, and realize that the thing that you thought should go in left to right should actually go in right to left. Once, before the palatial Coyote Hill Road building was constructed, a group that Starkweather had to be connected to was moved to another building, across the Foothill Expressway, half a mile away. There was no way to run a cable under the highway. So Starkweather fired a laser through the air between the two buildings, an improvised communications system that meant that, if you were driving down the Foothill Expressway on a foggy night and happened to look up, you might see a mysterious red beam streaking across the sky. When a motorist drove into the median ditch, “we had to turn it down,” Starkweather recalled, with a mischievous smile.

Lasers were Starkweather’s specialty. He started at Xerox’s East Coast research facility in Webster, New York, outside Rochester. Xerox built machines that scanned a printed page of type using a photographic lens, and then printed a duplicate. Starkweather’s idea was to skip the first step—to run a document from a computer directly into a photocopier, by means of a laser, and turn the Xerox machine into a printer. It was a radical idea. The printer, since Gutenberg, had been limited to the function of re-creation: if you wanted to print a specific image or letter, you had to have a physical character or mark corresponding to that image or letter. What Starkweather wanted to do was take the array of bits and bytes, ones and zeros that constitute digital images, and transfer them straight into the guts of a copier. That meant, at least in theory, that he could print anything.

“One morning, I woke up and I thought, Why don’t we just print something out directly?” Starkweather said. “But when I flew that past my boss he thought it was the most brain-dead idea he had ever heard. He basically told me to find something else to do. The feeling was that lasers were too expensive. They didn’t work that well. Nobody wants to do this, computers aren’t powerful enough. And I guess, in my naïveté, I kept thinking, He’s just not right—there’s something about this I really like. It got to be a frustrating situation. He and I came to loggerheads over the thing, about late 1969, early 1970. I was running my experiments in the back room behind a black curtain. I played with them when I could. He threatened to lay off my people if I didn’t stop. I was having to make a decision: do I abandon this, or do I try and go up the ladder with it?”

Then Starkweather heard that Xerox was opening a research center in Palo Alto, three thousand miles away from its New York headquarters. He went to a senior vice-president of Xerox, threatening to leave for I.B.M. if he didn’t get a transfer. In January of 1971, his wish was granted, and, within ten months, he had a prototype up and running.

Starkweather is retired now, and lives in a gated community just north of Orlando, Florida. When we spoke, he was sitting at a picnic table, inside a screened-in porch in his back yard. Behind him, golfers whirred by in carts. He was wearing white chinos and a shiny black short-sleeved shirt, decorated with fluorescent images of vintage hot rods. He had brought out two large plastic bins filled with the artifacts of his research, and he spread the contents on the table: a metal octagonal disk, sketches on lab paper, a black plastic laser housing that served as the innards for one of his printers.

“There was still a tremendous amount of opposition from the Webster group, who saw no future in computer printing,” he went on. “They said, ‘I.B.M. is doing that. Why do we need to do that?’ and so forth. Also, there were two or three competing projects, which I guess I have the luxury of calling ridiculous. One group had fifty people and another had twenty. I had two.” Starkweather picked up a picture of one of his in-house competitors, something called an “optical carriage printer.” It was the size of one of those modular Italian kitchen units that you see advertised in fancy design magazines. “It was an unbelievable device,” he said, with a rueful chuckle. “It had a ten-inch drum, which turned at five thousand r.p.m., like a super washing machine. It had characters printed on its surface. I think they only ever sold ten of them. The problem was that it was spinning so fast that the drum would blow out and the characters would fly off. And there was only this one lady in Troy, New York, who knew how to put the characters on so that they would stay.

“So we finally decided to have what I called a fly-off. There was a full page of text—where some of them were non-serif characters, Helvetica, stuff like that—and then a page of graph paper with grid lines, and pages with pictures and some other complex stuff—and everybody had to print all six pages. Well, once we decided on those six pages, I knew I’d won, because I knew there wasn’t anything I couldn’t print. Are you kidding? If you can translate it into bits, I can print it. Some of these other machines had to go through hoops just to print a curve. A week after the fly-off, they folded those other projects. I was the only game in town.” The project turned into the Xerox 9700, the first high-speed, cut-paper laser printer in the world.

5.

In one sense, the Starkweather story is of a piece with the Steve Jobs visit. It is an example of the imaginative poverty of Xerox management. Starkweather had to hide his laser behind a curtain. He had to fight for his transfer to PARC. He had to endure the indignity of the fly-off, and even then Xerox management remained skeptical. The founder of PARC, Jack Goldman, had to bring in a team from Rochester for a personal demonstration. After that, Starkweather and Goldman had an idea for getting the laser printer to market quickly: graft a laser onto a Xerox copier called the 7000. The 7000 was an older model, and Xerox had lots of 7000s sitting around that had just come off lease. Goldman even had a customer ready: the Lawrence Livermore laboratory was prepared to buy a whole slate of the machines. Xerox said no. Then Starkweather wanted to make what he called a photo-typesetter, which produced camera-ready copy right on your desk. Xerox said no. “I wanted to work on higher-performance scanners,” Starkweather continued. “In other words, what if we print something other than documents? For example, I made a high-resolution scanner and you could print on glass plates.” He rummaged in one of the boxes on the picnic table and came out with a sheet of glass, roughly six inches square, on which a photograph of a child’s face appeared. The same idea, he said, could have been used to make “masks” for the semiconductor industry—the densely patterned screens used to etch the designs on computer chips. “No one would ever follow through, because Xerox said, ‘Now you’re in Intel’s market, what are you doing that for?’ They just could not seem to see that they were in the information business. This”—he lifted up the plate with the little girl’s face on it— “is a copy. It’s just not a copy of an office document.” But he got nowhere. “Xerox had been infested by a bunch of spreadsheet experts who thought you could decide every product based on metrics. Unfortunately, creativity wasn’t on a metric.”

A few days after that afternoon in his back yard, however, Starkweather e-mailed an addendum to his discussion of his experiences at PARC. “Despite all the hassles and risks that happened in getting the laser printer going, in retrospect the journey was that much more exciting,” he wrote. “Often difficulties are just opportunities in disguise.” Perhaps he felt that he had painted too negative a picture of his time at Xerox, or suffered a pang of guilt about what it must have been like to be one of those Xerox executives on the other side of the table. The truth is that Starkweather was a difficult employee. It went hand in hand with what made him such an extraordinary innovator. When his boss told him to quit working on lasers, he continued in secret. He was disruptive and stubborn and independent-minded—and he had a thousand ideas, and sorting out the good ideas from the bad wasn’t always easy. Should Xerox have put out a special order of laser printers for Lawrence Livermore, based on the old 7000 copier? In “Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer” (1988)—a book dedicated to the idea that Xerox was run by the blind—Douglas Smith and Robert Alexander admit that the proposal was hopelessly impractical: “The scanty Livermore proposal could not justify the investment required to start a laser printing business…. How and where would Xerox manufacture the laser printers? Who would sell and service them? Who would buy them and why?” Starkweather, and his compatriots at Xerox PARC, weren’t the source of disciplined strategic insights. They were wild geysers of creative energy.

The psychologist Dean Simonton argues that this fecundity is often at the heart of what distinguishes the truly gifted. The difference between Bach and his forgotten peers isn’t necessarily that he had a better ratio of hits to misses. The difference is that the mediocre might have a dozen ideas, while Bach, in his lifetime, created more than a thousand full-fledged musical compositions. A genius is a genius, Simonton maintains, because he can put together such a staggering number of insights, ideas, theories, random observations, and unexpected connections that he almost inevitably ends up with something great. “Quality,” Simonton writes, is “a probabilistic function of quantity.”

Simonton’s point is that there is nothing neat and efficient about creativity. “The more successes there are,” he says, “the more failures there are as well”—meaning that the person who had far more ideas than the rest of us will have far more bad ideas than the rest of us, too. This is why managing the creative process is so difficult. The making of the classic Rolling Stones album “Exile on Main Street” was an ordeal, Keith Richards writes in his new memoir, because the band had too many ideas. It had to fight from under an avalanche of mediocrity: “Head in the Toilet Blues,” “Leather Jackets,” “Windmill,” “I Was Just a Country Boy,” “Bent Green Needles,” “Labour Pains,” and “Pommes de Terre”—the last of which Richards explains with the apologetic, “Well, we were in France at the time.”

At one point, Richards quotes a friend, Jim Dickinson, remembering the origins of the song “Brown Sugar”:

I watched Mick write the lyrics. . . . He wrote it down as fast as he could move his hand. I’d never seen anything like it. He had one of those yellow legal pads, and he’d write a verse a page, just write a verse and then turn the page, and when he had three pages filled, they started to cut it. It was amazing.

Richards goes on to marvel, “It’s unbelievable how prolific he was.” Then he writes, “Sometimes you’d wonder how to turn the fucking tap off. The odd times he would come out with so many lyrics, you’re crowding the airwaves, boy.” Richards clearly saw himself as the creative steward of the Rolling Stones (only in a rock-and-roll band, by the way, can someone like Keith Richards perceive himself as the responsible one), and he came to understand that one of the hardest and most crucial parts of his job was to “turn the fucking tap off,” to rein in Mick Jagger’s incredible creative energy.

The more Starkweather talked, the more apparent it became that his entire career had been a version of this problem. Someone was always trying to turn his tap off. But someone had to turn his tap off: the interests of the innovator aren’t perfectly aligned with the interests of the corporation. Starkweather saw ideas on their own merits. Xerox was a multinational corporation, with shareholders, a huge sales force, and a vast corporate customer base, and it needed to consider every new idea within the context of what it already had.

Xerox’s managers didn’t always make the right decisions when they said no to Starkweather. But he got to PARC, didn’t he? And Xerox, to its great credit, had a PARC—a place where, a continent away from the top managers, an engineer could sit and dream, and get every purchase order approved, and fire a laser across the Foothill Expressway if he was so inclined. Yes, he had to pit his laser printer against lesser ideas in the contest. But he won the contest. And, the instant he did, Xerox cancelled the competing projects and gave him the green light.

“I flew out there and gave a presentation to them on what I was looking at,” Starkweather said of his first visit to PARC. “They really liked it, because at the time they were building a personal computer, and they were beside themselves figuring out how they were going to get whatever was on the screen onto a sheet of paper. And when I showed them how I was going to put prints on a sheet of paper it was a marriage made in heaven.” The reason Xerox invented the laser printer, in other words, is that it invented the personal computer. Without the big idea, it would never have seen the value of the small idea. If you consider innovation to be efficient and ideas precious, that is a tragedy: you give the crown jewels away to Steve Jobs, and all you’re left with is a printer. But in the real, messy world of creativity, giving away the thing you don’t really understand for the thing that you do is an inevitable tradeoff.

“When you have a bunch of smart people with a broad enough charter, you will always get something good out of it,” Nathan Myhrvold, formerly a senior executive at Microsoft, argues. “It’s one of the best investments you could possibly make—but only if you chose to value it in terms of successes. If you chose to evaluate it in terms of how many times you failed, or times you could have succeeded and didn’t, then you are bound to be unhappy. Innovation is an unruly thing. There will be some ideas that don’t get caught in your cup. But that’s not what the game is about. The game is what you catch, not what you spill.”

In the nineteen-nineties, Myhrvold created a research laboratory at Microsoft modelled in part on what Xerox had done in Palo Alto in the nineteen-seventies, because he considered PARC a triumph, not a failure. “Xerox did research outside their business model, and when you do that you should not be surprised that you have a hard time dealing with it—any more than if some bright guy at Pfizer wrote a word processor. Good luck to Pfizer getting into the word-processing business. Meanwhile, the thing that they invented that was similar to their own business—a really big machine that spit paper—they made a lot of money on it.” And so they did. Gary Starkweather’s laser printer made billions for Xerox. It paid for every other single project at Xerox PARC, many times over.

6.

In 1988, Starkweather got a call from the head of one of Xerox’s competitors, trying to lure him away. It was someone whom he had met years ago. “The decision was painful,” he said. “I was a year from being a twenty-five-year veteran of the company. I mean, I’d done enough for Xerox that unless I burned the building down they would never fire me. But that wasn’t the issue. It’s about having ideas that are constantly squashed. So I said, ‘Enough of this,’ and I left.”

He had a good many years at his new company, he said. It was an extraordinarily creative place. He was part of decision-making at the highest level. “Every employee from technician to manager was hot for the new, exciting stuff,” he went on. “So, as far as buzz and daily environment, it was far and away the most fun I’ve ever had.” But it wasn’t perfect. “I remember I called in the head marketing guy and I said, ‘I want you to give me all the information you can come up with on when people buy one of our products—what software do they buy, what business are they in—so I can see the model of how people are using the machines.’ He looked at me and said, ‘I have no idea about that.’ ” Where was the rigor? Then Starkweather had a scheme for hooking up a high-resolution display to one of his new company’s computers. “I got it running and brought it into management and said, ‘Why don’t we show this at the tech expo in San Francisco? You’ll be able to rule the world.’ They said, ‘I don’t know. We don’t have room for it.’ It was that sort of thing. It was like me saying I’ve discovered a gold mine and you saying we can’t afford a shovel.”