1. The Rise and Fall of Pseudo-Productivity

In the summer of 1995, Leslie Moonves, the newly appointed head of entertainment for CBS, was wandering the halls of the network's vast Television City headquarters. He was not happy with what he saw: it was 3:30 p.m. on a Friday, and the office was three-quarters empty. As the media journalist Bill Carter reports in Desperate Networks, his 2006 book about the television industry during this period, a frustrated Moonves sent a heated memo about the empty office to his employees. "Unless anybody hasn't noticed, we're in third place [in the ratings]," he wrote. "My guess is that at ABC and NBC they're still working at 3:30 on Friday. This will no longer be tolerated."
" On first encounter, this vignette provides a stereotypical case study about the various ways the knowledge sector came to think about productivity during the twentieth century: "Work" is a vague thing that employees do in an office. More work creates better results than less. It''s a manager''s job to ensure enough work is getting done, because without this pressure, lazy employees will attempt to get away with the bare minimum. The most successful companies have the hardest workers. But how did we develop these beliefs? We''ve heard them enough times to convince ourselves that they''re probably true, but a closer look reveals a more complicated story. It doesn''t take much probing to discover that in the knowledge work environment, when it comes to the basic goal of getting things done, we actually know much less than we''re letting on . What Does "Productivity" Mean? As the full extent of our culture''s growing weariness with "productivity" became increasingly apparent in recent years, I decided to survey my readers about the topic. My goal was to nuance my understanding of what was driving this shift.
Ultimately, close to seven hundred people, almost all knowledge workers, participated in my informal study. My first substantive question was meant to be easy, a warm-up of sorts: "In your particular professional field, how would most people define 'productivity' or 'being productive'?" The responses I received to this initial query, however, surprised me. The issue was less what they said than what they didn't. By far the most common style of answer simply listed the types of things the respondent did in their job. "Producing content and services for the benefit of our member organizations," replied an executive named Michael. "The ability to produce [sermons] while simultaneously caring for your flock via personal visits," said a pastor named Jason. A researcher named Marianna pointed to "attending meetings ... running lab experiments ...
and producing peer-reviewed articles." An engineering director named George defined productivity as "doing what you said you would do." None of these answers included specific goals to meet or performance measures that could differentiate between doing a job well and doing it badly. When quantity was mentioned, it tended to be in the general sense that more is always better. (Productivity is "working all the time," explained an exhausted postdoc named Soph.) As I read through more of my surveys, an unsettling revelation began to emerge: for all of our complaining about the term, knowledge workers have no agreed-upon definition of what "productivity" even means. This vagueness extends beyond the self-reflection of individuals; it's also reflected in academic treatments of this topic. In 1999, the management theorist Peter Drucker published an influential paper titled "Knowledge-Worker Productivity: The Biggest Challenge."
" Early in the article, Drucker admits that "work on the productivity of the knowledge worker has barely begun." In an attempt to rectify this reality, he goes on to list six "major factors" that influence productivity in the knowledge sector, including clarity about tasks and a commitment to continuous learning and innovation. As in my survey responses, all of this is just him talking around the issue-identifying things that might support productive work in a general sense, not providing specific properties to measure, or processes to improve. A few years ago, I interviewed a distinguished Babson College management professor named Tom Davenport for an article. I was interested in Davenport because, earlier in his career, he was one of the few academics I could find who seriously attempted to study productivity in the knowledge sector, culminating in his 2005 book, Thinking for a Living: How to Get Better Performance and Results from Knowledge Workers. Davenport ultimately became frustrated with the difficulty of making meaningful progress on this topic and moved on to more rewarding areas. "In most cases, people don''t measure the productivity of knowledge workers," he explained. "And when we do, we do it in really silly ways, like how many papers do academics produce, regardless of quality.
We are still in the quite early stages." Davenport has written or edited twenty-five books. He told me that Thinking for a Living was the worst-selling of them all. It's hard to overemphasize how unusual it is that an economic sector as large as knowledge work lacks useful standard definitions of productivity. In almost every other area of our economy, not only is productivity a well-defined concept, but it's often central to how work unfolds. Indeed, much of the astonishing economic growth fueling modernity can be attributed to a more systematic treatment of this fundamental idea. Early uses of the term can be traced back to agriculture, where its meaning is straightforward. For a farmer, the productivity of a given parcel of land can be measured by the amount of food the land produces.
This ratio of output to input provides a compass of sorts that allows farmers to navigate the possible ways to cultivate their crops: systems that work better will produce measurably more bushels per acre. This use of a clear productivity metric to help improve clearly defined processes might sound obvious, but the introduction of this approach enabled explosive leaps forward in efficiency. In the seventeenth century, for example, it was exactly this type of metric-driven experimentation that led to the Norfolk four-course system of planting, which eliminated the need to leave fields fallow. This in turn made many farmers suddenly much more productive, helping to spur the British agricultural revolution. As the Industrial Revolution began to emanate outward from Britain in the eighteenth century, early capitalists adapted similar notions of productivity from farm fields to their mills and factories. As with growing crops, the key idea was to measure the amount of output produced for a given amount of input and then experiment with different processes for improving this value. Farmers care about bushels per acre, while factory owners care about automobiles produced per paid hour of labor. Farmers might improve their metric by using a smarter crop rotation system, while factory owners might improve their metric by shifting production to a continuous-motion assembly line.
In these examples, different types of things are being produced, but the force driving changes in methods is the same: productivity. There was, of course, a well-known human cost to this emphasis on measurable improvement. Working on an assembly line is repetitive and boring, and the push for individuals to be more efficient in their every action creates conditions that promote injury and exhaustion. But the ability of productivity to generate astonishing economic growth in these sectors swept aside most such concerns. Assembly lines are dreary for workers, but when Henry Ford switched his factory in Highland Park, Michigan, to this method in 1913, the labor-hours required to produce a Model T dropped from 12.5 to around 1.5, a staggering improvement. By the end of the decade, half of the cars in the United States had been produced by the Ford Motor Company.
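To make the arithmetic behind these claims concrete, here is a minimal sketch, not drawn from the book, of the output-over-input ratio described above. The function name and the 30-bushel harvest figure are illustrative assumptions; the 12.5 and 1.5 labor-hour figures come from the Ford example just mentioned.

```python
# Productivity as a simple ratio: output produced per unit of input consumed.
def productivity(output: float, input_amount: float) -> float:
    return output / input_amount

# Farming: bushels harvested per acre planted (30 bushels is an illustrative number).
bushels_per_acre = productivity(output=30, input_amount=1)

# Manufacturing: Model Ts per labor-hour, before and after the assembly line,
# using the 12.5 and 1.5 labor-hour figures cited above.
before = productivity(output=1, input_amount=12.5)
after = productivity(output=1, input_amount=1.5)

print(f"Bushels per acre: {bushels_per_acre}")
print(f"Improvement factor from the assembly line: {after / before:.1f}x")  # roughly 8.3x
```

The ratio's structure is identical in both cases; only what counts as output and input changes.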
These rewards were too powerful to resist. The story of economic growth in the modern Western world is in many ways a story about the triumph of productivity thinking. But then the knowledge sector emerged as a major force in the mid-twentieth century, and this profitable dependence on crisp, quantitative, formal notions of productivity all but vanished. There was, as it turns out, a good reason for this abandonment: the old notions of productivity that worked so well in farming and manufacturing didn't seem to apply to this new style of cognitive work. One problem is the variability of effort. When the infamous efficiency consultant Frederick Winslow Taylor was hired to improve productivity at Bethlehem Steel in the early twentieth century, he could assume that each worker at the foundry was responsible for a single, clear task, like shoveling slag iron. This made it possible for him to precisely measure their output per unit of time and seek ways to improve this metric. In this particular example, Taylor ended up designing a better shovel for the foundry workers, one that carefully balanced the desire to move more iron per scoop against the need to avoid unproductive overexertion.
(In case you're wondering, he determined the optimal shovel load was twenty-one pounds.) In knowledge work, by contrast, individuals are often wrangling complicated and constantly shifting workloads. You might be working on a client report at the same time that you're gathering testimonials for the company website and organizing an office party, all the while updating a conflict-of-interest statement that human resources just emailed you about. In this setting, there's no clear single output to track. And even if you do wade through this swamp of activity to identify the work that ma...