Palo Alto, CA
I am a software engineer at Google, where I've been since March 2014.
I now work mostly on concurrent programming issues, both in general and
with a focus on Android.
I am an ACM Fellow, and a past Chair of ACM SIGPLAN (2001-2003).
Until late 2017 I chaired the ISO C++ Concurrency Study Group (WG21/SG1),
in which I continue to participate actively.
A list of my selected publications is here.
Where I've been
In the past I've worked or studied at:
- HP Labs (Researcher, Research Manager)
- SGI (Software Engineer)
- Xerox PARC (Researcher)
- Rice University (Assistant and Associate Professor)
- University of Washington (Assistant Professor)
- Cornell University (graduate student)
- University of Washington (undergraduate student)
Things I've worked on more recently:
- I implemented the arithmetic evaluation engine for the default
Android Calculator. It avoids cumulative errors and provides
arbitrarily scrollable, always-accurate results.
- Understanding how to program systems with non-volatile
byte-addressable memory. We expect that such memory will become
far more common, and it is not yet clear if and how it will impact our
basic programming model.
- Helping to uncover and remove obstacles to writing
reliable multithreaded code.
Unfortunately, programming language definitions,
and sometimes even computer architecture specifications, have historically been so
vague that they couldn't possibly serve as a basis for teaching
programmers, or as reasonable guidelines for compiler writers.
I participated in the revision of the
Java "memory model", and more recently led a successful effort
to properly define shared variable semantics in C and C++.
This was part of a larger effort to add thread support to those languages.
- Understanding the compiler optimization consequences of the
preceding work. Some time-honored optimization techniques
(e.g. certain flavors of register promotion and vectorization) are now
clearly wrong in light of the preceding work, and were arguably
wrong all along. Other techniques that previously suffered from uncertain
legitimacy are now clearly correct.
- Alternative approaches to shared memory programming that simplify
(or perhaps make possible) the programmer's job. I participate in
the C++ transactional memory standardization effort (WG21/SG5).
I have worked on "always on" data-race detection.
Interest Areas and Past Projects
- Conservative Garbage Collection
This work was started at Rice University, where it
grew out of work on the
implementation of the Russell programming language, which was
jointly developed with Alan Demers.
It was developed substantially further at Xerox PARC, SGI, and HP,
with the help
of many other contributors.
It resulted in a generally available and widely used
garbage collector library,
as well as a number of publications.
I still maintain the mailing list, etc., but I am only occasionally
involved in maintaining the code base.
- Multiprocessor Synchronization Algorithms
I have worked on fast lock implementations for Java virtual
machines, and I'm generally interested in fast multiprocessor
synchronization. I coauthored a paper on
practical implementation of monitor locks without hardware support.
I also wrote the original version of libatomic_ops, which was useful
at the time, and helped us to avoid some of its mistakes in the
later design of C++11 atomics.
- Constructive Real Arithmetic
Together with Corky Cartwright, Vernon Lee, and others,
I explored practical implementations of "exact" real computer arithmetic.
Numbers are represented exactly internal to the computer, in a form
that allows evaluation to any requested precision for display.
This resulted in several papers and a sequence of implementations.
The most recent of these implementations,
now also 15 years old, recently became the basis of the arithmetic
implementation in the default Android Calculator.
- Ropes or Scalable Strings
The Xerox Cedar environment relies heavily on a scalable implementation
of strings, i.e. sequences of characters represented as trees.
(The Cedar implementation was
developed by Russ Atkinson, Michael Plass, and others. Similar ideas
were also developed elsewhere.) This idea simplifies many
interesting software systems, but hasn't propagated very far.
I reimplemented it both in C at Xerox (the cord package, now a part of the
garbage collector distribution) and in a very different form for C++ at
SGI (the "rope" package in the SGI STL).
- The Russell Programming Language
At Cornell, the University of Washington, and at Rice, I worked
on the semantics and implementation of a polymorphically-typed
programming language called Russell. A rather dated
implementation can still be found online.
The language was mostly designed by Donahue and Demers at Cornell.
Unfortunately, some of the good ideas from this and related designs
(e.g."templates" with real separate type checking and compilation)
still haven't completely made it into mainstream programming languages.
(Fortunately, neither did the bad ideas.)
Slides from some recent talks
Using weakly ordered C++ atomics correctly, Sept. 21, 2016.
Myths and Misconceptions about Threads, SPAA 2015 keynote talk, June 13, 2015.
Transactional Memory in C++, TRANSACT 2015 invited talk, June 16, 2015.
The C11 and C++11/14 Memory Model,
Lockheed, Mar. 3, 2015. (More of a tutorial than other recent talks.)
Putting Threads on a Solid Foundation: Some Remaining Issues.
Outlawing Ghosts: Avoiding Out-Of-Thin-Air Results.
Nondeterminism is Unavoidable, But Data Races are Pure Evil, RACES'12 Workshop, October 21, 2012.
Can Seqlocks Get Along with Programming Language Memory Models?, Beijing, China, June 16, 2012.
Threads and Shared Variables in C++11 and elsewhere, ACM San Francisco Bay Area Professional Chapter, April 18, 2012.
Threads and Shared Variables in C++11, GoingNative 2012.
How to Miscompile Programs with "Benign" Data Races, HotPar 2011.
Performance Implications of Fence-Based Memory Models, MSPC 2011.
Threads and Shared Variables in C++0x, BoostCon 2011.
Programming Language Memory Models: What do Shared Variables Mean? (2010; variants given at UC Berkeley (Oct.), UCSB (Sept.), and CMU (March); a revised version given at ECOOP 2011)
Semantics of Shared Variables and Synchronization a.k.a. Memory Models
(3.5 hour tutorial presented with Sarita Adve at PLDI, June 2010)
Transactional Memory Should be an Implementation Technique, Not a Programming Interface (Presented at HotPar09 workshop)
Reordering Constraints for Pthread-Style Locks (PPoPP 07 slides)
Finalization Should Not Be Based on Reachability (ISMM 06 Wild & Crazy Ideas session)
Finalizers, Threads, and the Java Memory Model
Threads Cannot be Implemented as a Library (PLDI 2005)
An Almost Non-Blocking Stack (PODC 2004)
BDWGC tutorial (ISMM 2004)
The Space Cost of Lazy Reference Counting (POPL 2004)
Performance of Non-Moving Garbage Collectors
Destructors, Finalizers, and Synchronization (POPL 2003)
Boehm family personal home page.