The one piece of data that could actually shed light on your job and AI

The rise of artificial intelligence has sparked endless debates about its effect on employment. Will AI eliminate millions of jobs, or will it augment human work and spur new opportunities? Pundits and policymakers offer bold predictions, but most lack a solid foundation. Aggregate employment statistics, such as unemployment rates or job growth in broad sectors, provide only a fuzzy picture. They reveal net outcomes but obscure the underlying dynamics: which specific tasks within jobs are being automated, augmented, or created anew.

To truly understand AI’s footprint on the workforce, researchers need granular data on one key metric: the share of time workers spend on tasks exposed to AI automation. This “task exposure” measure tracks how workers allocate their hours across activities and how that allocation shifts over time. Without longitudinal data capturing these changes, claims about AI’s job effects remain speculative. Economists have long advocated a task-based framework for analyzing technological change, yet the requisite data remains scarce.
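The task-exposure measure described above can be sketched in a few lines of code. This is a minimal illustration, not an established methodology: the task names, time allocations, and per-task exposure ratings below are invented for the example, and a real measure would draw both from survey data.

```python
# Hypothetical sketch: a worker's "task exposure" is the share of work time
# spent on tasks rated as automatable, i.e. a time-weighted average of
# per-task AI-exposure ratings (0 = not exposed, 1 = fully exposed).
# All task names and numbers are invented for illustration.

def task_exposure(time_shares: dict, ai_exposure: dict) -> float:
    """Time-weighted average of per-task AI-exposure ratings."""
    total = sum(time_shares.values())
    return sum(share * ai_exposure.get(task, 0.0)
               for task, share in time_shares.items()) / total

# A lawyer's (made-up) weekly hours and per-task exposure ratings.
hours = {"drafting contracts": 16, "client meetings": 12,
         "legal research": 8, "court appearances": 4}
exposure = {"drafting contracts": 0.8, "client meetings": 0.1,
            "legal research": 0.6, "court appearances": 0.05}

print(round(task_exposure(hours, exposure), 3))  # prints 0.475
```

Tracking this single number for the same workers year over year is exactly the longitudinal signal the article argues is missing.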

Consider the evolution of work analysis. In the early 2000s, economists David Autor, Frank Levy, and Richard Murnane introduced the concept of tasks to explain why computers hollowed out routine middle-skill jobs while boosting demand for nonroutine cognitive and manual work. This polarization of the labor market became a hallmark of the digital age. Today, with generative AI tools like ChatGPT capable of handling language-based tasks from writing to coding, a similar task-centric lens is essential. But static occupational classifications fail to capture how AI reshapes daily workflows.

The US Department of Labor maintains the Occupational Information Network, or O*NET, a database describing the tasks, skills, and abilities required for hundreds of occupations. O*NET rates tasks on scales such as importance and frequency, enabling researchers to score jobs by their exposure to AI. For instance, studies using O*NET have found that AI-vulnerable occupations, such as those heavy in predictable analytic tasks, grew 3.6 percentage points faster from 2010 to 2018 than less-exposed ones. More recent analyses, like those from the Institute for Public Policy Research, highlight risks to clerical and administrative roles.
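The O*NET-based scoring the studies above rely on can be sketched as an importance-weighted average over an occupation's tasks. The tasks, importance values (on O*NET's 1-to-5 importance scale), and automation scores below are invented placeholders, not actual O*NET data.

```python
# Hypothetical sketch of O*NET-style occupation scoring: combine per-task
# AI-automation ratings into one occupation-level exposure score, weighting
# each task by its importance rating. All numbers are invented.

def occupation_exposure(tasks):
    """tasks: list of (importance, automation_score) pairs.
    Returns the importance-weighted mean automation score."""
    total_importance = sum(imp for imp, _ in tasks)
    return sum(imp * score for imp, score in tasks) / total_importance

# A clerical occupation with four made-up tasks.
clerical = [(4.5, 0.9),   # data entry: very important, highly automatable
            (4.0, 0.7),   # scheduling
            (3.0, 0.4),   # phone support
            (2.5, 0.1)]   # in-person reception

print(round(occupation_exposure(clerical), 3))  # prints 0.593
```

The weakness the next paragraph identifies is visible here: the importance weights are static snapshots, so the score cannot register a worker shifting hours away from the automatable tasks.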

Yet O*NET’s snapshots are updated sporadically, typically every few years, and rely on expert surveys rather than direct worker reports. They do not reveal how task time allocations evolve dynamically. A lawyer might spend 40 percent of her day drafting contracts one year and 20 percent the next, as AI tools take over routine clauses. Aggregate job counts miss this granularity.

Enter time-use surveys, the gold standard for measuring task composition. The American Time Use Survey (ATUS), conducted annually by the BLS since 2003, asks a representative sample of roughly 10,000 Americans to recount the previous day's activities, episode by episode, in a detailed time diary. Respondents categorize time into broad activities like “work” or “leisure,” with secondary probes for work-specific tasks such as emails or meetings. ATUS data show that professional workers now spend about 20 percent of their time on computer-based tasks, up from 10 percent two decades ago.
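Turning a diary of timestamped episodes into the task shares researchers need is mechanically simple. The sketch below assumes a simplified episode format (start time, stop time, category) with invented entries; real ATUS microdata uses its own activity coding lexicon.

```python
# Minimal sketch: aggregate a time diary of (start, stop, category)
# episodes into each category's share of total logged time.
# Episode data and category names are invented for illustration.
from datetime import datetime

def diary_shares(episodes):
    """episodes: list of (start, stop, category) with 'HH:MM' strings,
    all within one day. Returns {category: share of logged time}."""
    minutes = {}
    for start, stop, cat in episodes:
        t0 = datetime.strptime(start, "%H:%M")
        t1 = datetime.strptime(stop, "%H:%M")
        minutes[cat] = minutes.get(cat, 0) + (t1 - t0).seconds // 60
    total = sum(minutes.values())
    return {cat: m / total for cat, m in minutes.items()}

day = [("09:00", "10:30", "emails"),
       ("10:30", "12:00", "meetings"),
       ("13:00", "17:00", "drafting")]
print(diary_shares(day))  # drafting takes 240 of 420 logged minutes
```

Repeating this aggregation for the same respondents across waves is what would turn a one-day snapshot into the longitudinal series the article calls for.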

While valuable, ATUS has limitations. Its sample size is small for rare occupations, and it captures a single day, introducing noise from atypical schedules. Critically, it lacks longitudinal tracking of the same individuals or consistent task codes over time. Pre-AI baselines are thus hard to establish, making it tricky to attribute shifts to specific technologies.

Other nations offer glimpses of what richer data could yield. The UK’s Understanding Society survey, a longitudinal panel of over 40,000 households since 2009, includes periodic modules on work tasks. Analysis of this data reveals that higher-paid workers have increased time on “thinking” tasks while cutting “doing” tasks, a pattern accelerating with AI. In Denmark, the Work Environment and Health surveys track task changes biennially. Such datasets allow causal inference, like linking AI adoption to reduced time on automatable tasks.
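The kind of causal inference such panels enable can be illustrated with the simplest design in the toolkit: a two-group, two-period difference-in-differences comparing AI adopters with non-adopters. The numbers below are invented, and a real analysis would need controls, parallel-trends checks, and standard errors; this only shows the arithmetic.

```python
# Hedged sketch of a 2x2 difference-in-differences estimate: the change in
# hours on automatable tasks among AI adopters, net of the change among
# non-adopters over the same period. All numbers are invented.

def did(treated_pre, treated_post, control_pre, control_post):
    """Classic 2x2 difference-in-differences estimator."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean hours/week on automatable tasks, before and after
# an AI tool rollout, for adopting vs. non-adopting workers.
effect = did(treated_pre=15.0, treated_post=9.0,
             control_pre=14.0, control_post=13.0)
print(effect)  # prints -5.0: adoption linked to five fewer such hours
```

Without repeated task-time observations on the same workers, the "pre" terms in this comparison simply do not exist, which is the article's central complaint about US data.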

In the US, private sources fill some gaps. The Nielsen Company collects detailed time diaries from consumer panels, though access is restricted and not occupation-focused. Firm-level data from internal HR systems or software logs, such as Microsoft Viva Insights, track employee activities at scale but remain proprietary.

Researchers are pushing for better public data. Economists Daron Acemoglu and Pascual Restrepo call for augmenting ATUS with AI-specific modules, asking workers about tools like large language models. The BLS could mandate task diaries in its monthly employment surveys or create a dedicated AI impact panel. International efforts, like the OECD’s task-based frameworks, underscore the need for harmonized global standards.

Emerging evidence hints at AI’s effects. A 2023 study by economists John Van Reenen and Xiting Wu, using O*NET and ATUS, found that AI-exposed workers shifted toward more creative tasks post-ChatGPT. Occupational growth in AI-vulnerable fields continues, suggesting complementarity over displacement so far. But without comprehensive time-use data spanning the AI boom, these insights are provisional.

Policymakers must prioritize data infrastructure. Investments in longitudinal surveys could guide retraining programs, targeting workers whose tasks face disruption. For businesses, granular metrics inform AI deployment strategies, maximizing productivity gains.

Ultimately, the path to clarity lies in tracking task time exposure longitudinally. This single piece of data would demystify AI’s labor market effects, separating hype from reality and enabling proactive adaptation.