
Saturday, May 25, 2019

Algorithmic Governance and Political Legitimacy


Ex: https://americanaffairesjournal.org

In ever more areas of life, algorithms are coming to substitute for judgment exercised by identifiable human beings who can be held to account. The rationale offered is that automated decision-making will be more reliable. But a further attraction is that it serves to insulate various forms of power from popular pressures.

Our readiness to acquiesce in the conceit of authorless control is surely due in part to our ideal of procedural fairness, which demands that individual discretion exercised by those in power should be replaced with rules whenever possible, because authority will inevitably be abused. This is the original core of liberalism, dating from the English Revolution.

Mechanized judgment resembles liberal proceduralism. It relies on our habit of deference to rules, and our suspicion of visible, personified authority. But its effect is to erode precisely those procedural liberties that are the great accomplishment of the liberal tradition, and to place authority beyond scrutiny. I mean “authority” in the broadest sense, including our interactions with outsized commercial entities that play a quasi-governmental role in our lives. That is the first problem. A second problem is that decisions made by algorithm are often not explainable, even by those who wrote the algorithm, and for that reason cannot win rational assent. This is the more fundamental problem posed by mechanized decision-making, as it touches on the basis of political legitimacy in any liberal regime.

I hope that what follows can help explain why so many people feel angry, put-upon, and powerless. And why so many, in expressing their anger, refer to something called “the establishment,” that shadowy and pervasive entity. In this essay I will be critiquing both algorithmic governance and (more controversially) the tenets of progressive governance that make these digital innovations attractive to managers, bureaucrats, and visionaries. What is at stake is the qualitative character of institutional authority—how we experience it.

In The Black Box Society: The Hidden Algorithms That Control Money and Information, University of Maryland law professor Frank Pasquale elaborates in great detail what others have called “platform capitalism,” emphasizing its one-way mirror quality. Increasingly, every aspect of our lives—our movements through space, our patterns of consumption, our affiliations, our intellectual habits and political leanings, our linguistic patterns, our susceptibility to various kinds of pitches, our readiness to cave in minor disputes, our sexual predilections and ready inferences about the state of our marriage—is laid bare as data to be collected by companies who, for their own part, guard with military-grade secrecy the algorithms by which they use this information to determine our standing in the reputational economy. The classic stories that have been with us for decades, of someone trying to correct an error made by one of the credit rating agencies and finding that the process is utterly opaque and the agencies unaccountable, give us some indication of the kind of world that is being constructed for us.

What Was Self-Government?

“Children in their games are wont to submit to rules which they have themselves established, and to punish misdemeanors which they have themselves defined.” Thus did Tocqueville marvel at Americans’ habit of self-government, and the temperament it both required and encouraged from a young age. “The same spirit,” he said, “pervades every act of social life.”

Writing recently in the Atlantic, Yoni Appelbaum notes the sheer bulk of voluntary associations that once took up the hours and days of Americans, from labor unions and trade associations to mutual insurers, fraternal organizations, and volunteer fire departments. Many of these “mirrored the federal government in form: Local chapters elected representatives to state-level gatherings, which sent delegates to national assemblies. . . . Executive officers were accountable to legislative assemblies; independent judiciaries ensured that both complied with rules.” Toward the end of the twentieth century, however, this way of life more or less collapsed, as Robert Putnam documented in Bowling Alone. We still have voluntary associations, but they are now typically run by salaried professionals, not the members themselves, Appelbaum points out. This is part of the broader shift toward what has been called “managerialism.”

I believe these developments have prepared us to accept a further abstraction of institutional decision-making—as something having little to do with ourselves, and with human judgment as we know it firsthand. Another development that paved the way for our acceptance of algorithmic governance was the transfer of power out of representative bodies into administrative agencies. Let us take a glance at this before turning to things digital.

Administration versus Politics

We often hear about a growing “administrative state” (usually from conservative commentators) and are given to understand that it is something we ought to worry about. But why? Doesn’t it consist simply of the stuff all those agencies of the government must do in order to discharge their duties? And speaking of government agencies, what are they—part of the executive branch? Yes they are, but a bit of confusion is natural, as this is a hazy area of governance.


In The Administrative Threat, Columbia law professor Philip Hamburger writes that, in contrast to executive power proper, “administrative power involves not just force but legal obligation.” This is an important distinction, and in the blurring of it lies great mischief against the Constitution, which invested “the power to bind—that is, to create legal obligation” not in the executive branch but in Congress and in the courts. Administrative power “thereby sidesteps most of the Constitution’s procedural freedoms.” It is fundamentally an “evasion of governance through law.”

Provocatively, Hamburger draws a close parallel with the mechanisms of prerogative (such as the notorious Star Chamber) by which King James I of England consolidated “absolute” power—not infinite power, but power exercised through extralegal means:

Ever tempted to exert more power with less effort, rulers are rarely content to govern merely through the law, and in their restless desire to escape its pathways, many of them try to work through other mechanisms. These other modes of binding subjects are modes of absolute power, and once one understands this, it is not altogether surprising that absolute power is a recurring problem and that American administrative power revives it.

The “less effort” bit is just as important for understanding this formula as the “more power” bit. The relevant effort is that of persuading others, the stuff of democratic politics. The “restless desire to escape” the inconvenience of law is one that progressives are especially prone to, in their aspiration to transform society: merely extant majorities of opinion, and the legislative possibilities that are circumscribed by them, typically inspire not deference but impatience. Conservatives have their own vanguardist enthusiasms that rely on centralized and largely unaccountable power, but in their case this power is generally not located in executive agencies.1

I am not competent to say if Hamburger is right in his characterization of administrative power as extralegal (Harvard law professor Adrian Vermeule says no), but in any case he raises questions that are within the realm of conventional political dispute and within the competence of constitutional scholars to grapple with. By contrast, the rush to install forms of extralegal power not in executive agencies, but in the algorithms that increasingly govern wide swaths of life, pushes the issue of political legitimacy into entirely new territory.

More Power with Less Effort 

“Technology” is a slippery term. We don’t use it to name a toothbrush or a screwdriver, or even things like capacitors and diodes. Rather, use of the word usually indicates that a speaker is referring to a tool or an underlying operation that he does not understand (or does not wish to explain). Essentially, we use it to mean “magic.” In this obscurity lies great opportunity to “exert more power with less effort,” to use Hamburger’s formula.

To grasp the affinities between administrative governance and algorithmic governance, one must first get over that intellectually debilitating article of libertarian faith, namely that “the government” poses the only real threat to liberty. For what does Silicon Valley represent, if not a locus of quasi-governmental power untouched by either the democratic process or by those hard-won procedural liberties that are meant to secure us against abuses by the (actual, elected) government? If the governmental quality of rule by algorithms remains obscure to us, that is because we actively welcome it into our own lives under the rubric of convenience, the myth of free services, and ersatz forms of human connection—the new opiates of the masses.

To characterize this as the operation of “the free market” (as its spokespersons do) requires a display of intellectual agility that might be admirable if otherwise employed. The reality is that what has emerged is a new form of monopoly power made possible by the “network effect” of those platforms through which everyone must pass to conduct the business of life. These firms sit at informational bottlenecks, collecting data and then renting it out, whether for the purpose of targeted ads or for modeling the electoral success of a political platform. Mark Zuckerberg has said frankly that “In a lot of ways Facebook is more like a government than a traditional company. . . . We have this large community of people, and more than other technology companies we’re really setting policies.”

It was early innovations that allowed the platform firms to take up their positions. But it is this positioning, and control of the data it allows them to gather, that accounts for the unprecedented rents they are able to collect. If those profits measure anything at all, it is the reach of a metastasizing grid of surveillance and social control. As Pasquale emphasizes, it is this grid’s basic lack of intelligibility that renders it politically unaccountable. Yet political accountability is the very essence of representative government. Whatever this new form of governance might be called, it is certainly not that.

Explainability and Legitimacy

This intelligibility deficit cannot be overcome simply through goodwill, as the logic by which an algorithm reaches its conclusions is usually opaque even to those who wrote the algorithm, due to its sheer complexity. In “machine learning,” an array of variables is fed into deeply layered “neural nets” that simulate the fire/don’t fire synaptic connections of an animal brain. Vast amounts of data are used in a massively iterated (and, in some versions, unsupervised) training regimen. Because the strength of connections between logical nodes within layers and between layers is highly plastic, just like neural pathways, the machine gets trained by trial and error and is able to arrive at something resembling knowledge of the world. That is, it forms associations that correspond to regularities in the world.
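
To make that trial-and-error picture concrete, here is a minimal sketch of such a training loop, offered purely for illustration (it is not drawn from the essay, and every name, size, and parameter in it is an arbitrary choice): a tiny two-layer network written in plain Python with NumPy repeatedly adjusts its connection weights until its outputs track a simple regularity in the data, the XOR function.

```python
# Minimal sketch of "training by trial and error": a tiny two-layer neural
# network learns the XOR regularity by repeatedly nudging its connection
# weights against the error of its own predictions. Illustrative toy only.
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and the regularity to be learned (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Plastic" connection strengths, initialized at random.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output layer

lr = 0.5  # learning rate: how far each correction moves the weights

for step in range(5000):
    # Forward pass: the network's current guesses for all four inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: attribute the prediction error to each connection.
    grad_out = out - y                         # error at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)   # error passed back to hidden units

    # Trial-and-error adjustment of every connection strength.
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # approaches [0, 1, 1, 0] after training
```

Even at this toy scale, the trained “knowledge” is nothing but arrays of numbers; scaled up to millions of weights and real-world data, nothing in those numbers explains the system’s answers, which is the intelligibility problem at issue here.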

As with humans, these correspondences are imperfect. The difference is that human beings are able to give an account of their reasoning. Now, we need to be careful here. Our cognition emerges from lower-level biological processes that are utterly inaccessible to us. Further, human beings confabulate, reach for rationalizations that obscure more than they reveal, are subject to self-deception, etc. All this we know. But giving an account is an indispensable element of democratic politics. Further, it is not only legislative bodies that observe this scruple.

When a court issues a decision, the judge writes an opinion, typically running to many pages, in which he explains his reasoning. He grounds the decision in law, precedent, common sense, and principles that he feels obliged to articulate and defend. This is what transforms the decision from mere fiat into something that is politically legitimate, capable of securing the assent of a free people. It constitutes the difference between simple power and authority. One distinguishing feature of a modern, liberal society is that authority is supposed to have this rational quality to it—rather than appealing to, say, a special talent for priestly divination. This is our Enlightenment inheritance.

You see the problem, then. Institutional power that fails to secure its own legitimacy becomes untenable. If that legitimacy cannot be grounded in our shared rationality, based on reasons that can be articulated, interrogated, and defended, it will surely be claimed on some other basis. What this will be is already coming into view, and it bears a striking resemblance to priestly divination: the inscrutable arcana of data science, by which a new clerisy peers into a hidden layer of reality that is revealed only by a self-taught AI program—the logic of which is beyond human knowing.

For the past several years it has been common to hear establishmentarian intellectuals lament “populism” as a rejection of Enlightenment ideals. But we could just as well say that populism is a re-assertion of democracy, and of the Enlightenment principles that underlie it, against priestly authority. Our politics have become at bottom an epistemic quarrel, and it is not at all clear to me that the well-capitalized, institutional voices in this quarrel have the firmer ground to stand on in claiming the mantle of legitimacy—if we want to continue to insist that legitimacy rests on reasonableness and argument.

Alternatively, we may accept technocratic competence as a legitimate claim to rule, even if it is inscrutable. But then we are in a position of trust.2 This would be to move away from the originating insight of liberalism: power corrupts. Such a move toward trust seems unlikely, however, given that people are waking up to the business logic that often stands behind the promise of technocratic competence and good will.

Surveillance Capitalism

In her landmark 2019 book, The Age of Surveillance Capitalism, Shoshana Zuboff parses what is really a new form of political economy, bearing little resemblance to capitalism as we have known it. The cynic’s dictum for explaining internet economics—“if you don’t know what the product is, you’re the product”—turns out to be incorrect. What we are, Zuboff shows, is a source of raw material that she calls “behavioral surplus.” Our behavior becomes the basis for a product—predictions about our future behavior—which are then offered on a behavioral futures market. The customers of the platform firms are those who purchase these prediction products, as a means to influence our behavior. The more you know of someone’s predilections, the more highly elaborated, fine-grained, and successful your efforts to manipulate him will be. The raw material on which the whole apparatus runs is knowledge acquired through surveillance. Competition for behavioral surplus is such that this surveillance must reach ever deeper, and bring every hidden place into the light.


At this point, it becomes instructive to bring in a theory of the state, and see if it can illuminate what is going on. For this purpose, James C. Scott’s Seeing Like a State, published in 1998, is useful. He traces the development of the modern state as a process of rendering the lives of its inhabitants more “legible.” The premodern state was in many respects blind; it “knew precious little about its subjects, their wealth, their landholdings and yields, their location, their very identity. It lacked anything like a detailed ‘map’ of its terrain and its people.” This lack of a synoptic view put real limits on the aspiration for centralized control; the state’s “interventions were often crude and self-defeating.” And contrariwise, the rise of a more synoptic administrative apparatus made possible various utopian schemes for the wholesale remaking of society.

Google is taking its project for legibility to the streets, quite literally, in its plan to build a model city within Toronto in which everything will be surveilled. Sensors will be embedded throughout the physical plant to capture residents’ activities, then to be massaged by cutting-edge data science. The hope, clearly, is to build a deep, proprietary social science. Such a science could lead to real improvements in urban planning, for example, by being able to predict demand for heat and electricity, manage the allocation of traffic capacity, and automate waste disposal. But note that intellectual property rights over the data collected are key to the whole concept; without them there is no business rationale. With those rights secure, “smart cities” are the next trillion-dollar frontier for big tech.

Writing in Tablet, Jacob Siegel points out that “democratic governments think they can hire out for the basic service they’re supposed to provide, effectively subcontracting the day to day functions of running a city and providing municipal services. Well, they’re right, they can, but of course they’ll be advertising why they’re not really necessary and in the long run putting themselves out of a job.” There is real attraction to having the optimizers of Silicon Valley take things over, given the frequent dysfunction of democratic government. Quite reasonably, many of us would be willing to give up “some democracy for a bit of benign authoritarianism, if it only made the damn trains run on time. The tradeoff comes in the loss of power over the institutions we have to live inside.” The issue, then, is sovereignty.3


As Scott points out in Seeing Like a State, the models of society that can be constructed out of data, however synoptic, are necessarily radical simplifications. There is no quantitative model that can capture the multivalent richness of neighborhood life as described by Jane Jacobs, for example, in her classic urban ethnography of New York. The mischief of grand schemes for progress lies in the fact that, even in the absence of totalitarian aspirations, the logic of metrics and rationalization carries with it an imperative to remake the world, in such a way as to make its thin, formal descriptions true. The gap between the model and reality has to be narrowed. This effort may need to reach quite deep, beyond the arrangement of infrastructure to touch on considerations of political anthropology. The model demands, and helps bring into being, a certain kind of subject. And sure enough, in our nascent era of Big Data social engineering, we see a craze for self-quantification, the voluntary pursuit of a kind of self-legibility that is expressed in the same idiom as technocratic social control and demands the same sort of thin, schematic self-objectifications.4

If we take a long historical view, we have to concede that a regime can be viewed by its citizens as legitimate without being based on democratic representation. Europe’s absolutist monarchies of the seventeenth century managed it, for a spell. The regime that is being imagined for us now by venture capital would not be democratic. But it emphatically would be “woke,” if we can extrapolate from the current constellation of forces.

And this brings us to the issue of political correctness. It is a topic we are all fatigued with, I know. But I suggest we try to understand the rise of algorithmic governance in tandem with the rise of woke capital, as there appears to be some symbiotic affinity between them. The least one can say is that, taken together, they provide a good diagnostic lens that can help bring into focus the authoritarian turn of American society, and the increasingly shaky claim of our institutions to democratic legitimacy.

The Bureaucratic Logic of Political Correctness

One reason why algorithms have become attractive to elites is that they can be used to install the automated enforcement of cutting-edge social norms. In the last few months there have been some news items along these lines: Zuckerberg assured Congress that Facebook is developing AI that will detect and delete what progressives like to call “hate speech.” You don’t have to be a free speech absolutist to recognize how tendentiously that label often gets applied, and be concerned accordingly. The New York Times ran a story about new software being pitched in Hollywood that will determine if a script has “equitable gender roles.” The author of a forthcoming French book on artificial intelligence, herself an AI researcher, told me that she got a pitch recently from a start-up “whose aim was ‘to report workplace harassment and discrimination without talking to a human.’ They claim to be able to ‘use scientific memory and interview techniques to capture secure records of highly emotional events.’”

Presumably a scientifically “secure record” here means a description of some emotionally charged event that is shorn of ambiguity, and thereby tailored to the need of the system for clean inputs. A schematic description of inherently messy experience saves us the difficult, humanizing effort of interpretation and introspection, so we may be relieved to take up the flattened understanding that is offered us by the legibility-mongers.5

Locating the authority of evolving social norms in a computer will serve to provide a sheen of objectivity, such that any reluctance to embrace newly announced norms appears, not as dissent, but as something irrational—as a psychological defect that requires some kind of therapeutic intervention. So the effect will be to gather yet more power to what Michel Foucault called “the minor civil servants of moral orthopedics.” (Note that Harvard University has over fifty Title IX administrators on staff.) And there will be no satisfying this beast, because it isn’t in fact “social norms” that will be enforced (that term suggests something settled and agreed-upon); rather it will be a state of permanent revolution in social norms. Whatever else it is, wokeness is a competitive status game played in the institutions that serve as gatekeepers of the meritocracy. The flanking maneuvers of institutional actors against one another, and the competition among victim groups for relative standing on the intersectional totem pole, make the bounds of acceptable opinion highly unstable. This very unsettledness, quite apart from the specific content of the norm of the month, makes for pliable subjects of power: one is not allowed to develop confidence in the rightness of one’s own judgments.

To be always off-balance in this way is to be more administratable. A world-renowned historian, a real intellectual giant of the “old Left,” told me that once a year he is required to take an online, multiple-choice test administered by his university’s HR department, which looks for the proper responses to various office situations. It seems clear that there is a symbiotic relationship between administration and political correctness, yet it is difficult to say which is the senior partner in the alliance. The bloated and ever-growing layer of administrators—the deans of inclusion, providers of workshops for student orientation, diversity officers, and whatnot—feeds on conflict, using episodes of trouble to start new initiatives.

But what does any of this have to do with the appeal of algorithms to managers and administrators? If we follow through on the suspicion that, in its black-box quality, “technology” is simply administration by other means, a few observations can be made.

First, in the spirit of Václav Havel we might entertain the idea that the institutional workings of political correctness need to be shrouded in peremptory and opaque administrative mechanisms because its power lies precisely in the gap between what people actually think and what one is expected to say. It is in this gap that one has the experience of humiliation, of staying silent, and that is how power is exercised.

But if we put it this way, what we are really saying is not that PC needs administrative enforcement but rather the reverse: the expanding empire of bureaucrats needs PC. The conflicts created by identity politics become occasions to extend administrative authority into previously autonomous domains of activity. This would be to take a more Marxian line, treating PC as “superstructure” that serves mainly to grease the wheels for the interests of a distinct class—not of capitalists, but of managers.6

The incentive to technologize the whole drama enters thus: managers are answerable (sometimes legally) for the conflict that they also feed on. In a corporate setting, especially, some kind of ass‑covering becomes necessary. Judgments made by an algorithm (ideally one supplied by a third-party vendor) are ones that nobody has to take responsibility for. The more contentious the social and political landscape, the bigger the institutional taste for automated decision-making is likely to be.

Political correctness is a regime of institutionalized insecurity, both moral and material. Seemingly solid careers are subject to sudden reversal, along with one’s status as a decent person. Contrast such a feeling of being precarious with the educative effect of voluntary associations and collaborative rule-making, as marveled at by Tocqueville. Americans’ practice of self-government once gave rise to a legitimate pride—the pride of being a grown-up in a free society. One thing it means to be a grown-up is that you are willing to subordinate your own interests to the common good at crucial junctures. The embrace of artificial intelligence by institutions, as a way of managing social conflict, is likely to further erode that adult spirit of self-government, and contribute to the festering sense that our institutions are illegitimate.

The Ultimate Nudge

Of all the platform firms, Google is singular. Its near-monopoly on search (around 90 percent) puts it in a position to steer thought. And increasingly, it avows the steering of thought as its unique responsibility. Famously founded on the principle “Don’t be evil” (a sort of libertarian moral minimalism), it has since taken up the mission of actively doing good, according to its own lights.

In an important article titled “Google.gov,” law professor Adam J. White details both the personnel flows and deep intellectual affinities between Google and the Obama White House. Hundreds of people switched jobs back and forth between this one firm and the administration over eight years. It was a uniquely close relationship, based on a common ethos, that began with Obama’s visit to Google’s headquarters in 2004 and deepened during his presidential campaign in 2007. White writes:

Both view society’s challenges today as social-engineering problems, whose resolutions depend mainly on facts and objective reasoning. Both view information as being at once ruthlessly value-free and yet, when properly grasped, a powerful force for ideological and social reform. And so both aspire to reshape Americans’ informational context, ensuring that we make choices based only upon what they consider the right kind of facts—while denying that there could be any values or politics embedded in the effort.

One of the central tenets of progressives’ self-understanding is that they are pro-fact and pro-science, while their opponents (often the majority) are said to have an unaccountable aversion to these good things: they cling to fond illusions and irrational anxieties. It follows that good governance means giving people informed choices. This is not the same as giving people what they think they want, according to their untutored preferences. Informed choices are the ones that make sense within a well-curated informational setting or context.

When I was a doctoral student in political theory at the University of Chicago in the 1990s, there was worry about a fissure opening up between liberalism and democracy. The hot career track for my cohort was to tackle this problem under the rubric of an intellectual oeuvre called “deliberative democracy.” It wasn’t my thing, but as near as I could tell, the idea (which was taken from the German philosopher Jürgen Habermas) was that if you could just establish the right framing conditions for deliberation, the demos would arrive at acceptably liberal positions. We should be able to formalize these conditions, it was thought. And conversely, wherever the opinions of the demos depart from an axis running roughly from the editorial page of the New York Times to that of the Wall Street Journal, it was taken to be prima facie evidence that there was some distorting influence in the conditions under which people were conducting their thought processes, or their conversations among themselves. The result was opinion that was not authentically democratic (i.e., not liberal). These distortions too needed to be ferreted out and formalized. Then you would have yourself a proper theory.

Of course, the goal was not just to have a theory, but to get rid of the distortions. Less tendentiously: protecting the alliance “liberal democracy” required denying that it is an alliance, and propping it up as a conceptual unity. This would require a cadre of subtle dialecticians working at a meta-level on the formal conditions of thought, nudging the populace through a cognitive framing operation to be conducted beneath the threshold of explicit argument.

At the time, all this struck me as an exercise in self-delusion by aspiring apparatchiks for whom a frankly elitist posture would have been psychologically untenable. But the theory has proved immensely successful. By that I mean the basic assumptions and aspirations it expressed have been institutionalized in elite culture, perhaps nowhere more than at Google, in its capacity as directorate of information. The firm sees itself as “definer and defender of the public interest,” as White says.

One further bit of recent intellectual history is important for understanding the mental universe in which Google arose, and in which progressivism took on the character that it did during the Obama years. The last two decades saw the rise of new currents in the social sciences that emphasize the cognitive incompetence of human beings. The “rational actor” model of human behavior—a simplistic premise that had underwritten the party of the market for the previous half century—was deposed by the more psychologically informed school of behavioral economics, which teaches that we need all the help we can get in the form of external “nudges” and cognitive scaffolding if we are to do the rational thing. But the glee and sheer repetition with which this (needed) revision to our understanding of the human person has been trumpeted by journalists and popularizers indicates that it has some moral appeal, quite apart from its intellectual merits. Perhaps it is the old Enlightenment thrill at disabusing human beings of their pretensions to specialness, whether as made in the image of God or as “the rational animal.” The effect of this anti-humanism is to make us more receptive to the work of the nudgers.

The whole problem of “liberal democracy”—that unstable hybrid—is visible in microcosm at Google; it manifests as schizophrenia in how the founders characterize the firm’s mission. Their difficulty is understandable, as the trust people place in Google is based on its original mission of simply answering queries that reflect the extant priorities of a user, when in fact the mission has crept toward a more tutelary role in shaping thought.


Google achieved its dominance in search because of the superior salience of the results it returned compared to its late-1990s rivals. The mathematical insights of its founders played a large role in this. The other key to their success was that they rigorously excluded commercial considerations from influencing the search results. Reading accounts of the firm’s early days, one cannot help being struck by the sincere high-mindedness of the founders. The internet was something pure and beautiful, and would remain so only if guarded against the corrupting influence of advertising.

In other words, there was some truth in the founding myth of the company, namely, the claim that there are no human biases (whether value judgments or commercial interests) at work in the search results it presents to users. Everything is in the hands of neutral algorithms. The neutrality of these algorithms is something the rest of us have to take on trust, because they are secret (as indeed they need to be to protect their integrity against the cat-and-mouse game of “search engine optimization,” by which interested parties can manipulate their rank in the results).

Of course, from the very beginning, the algorithms in question have been written and are constantly adjusted by particular human beings, who assess the aptness of the results they generate according to their own standards of judgment. So the God’s-eye perspective, view-from-nowhere conceit is more ideal than reality. But that ideal plays a legitimating role that has grown in importance as the company has become a commercial behemoth, and developed a powerful interest in steering users of its search engine toward its own services.7 More importantly, the ideal of neutral objectivity underlies Google’s self-understanding as definer and defender of the public interest.

This is the same conceit of epistemic/moral hauteur that Obama adopted as the lynchpin of his candidacy. The distinctive feature of this rhetoric is that the idea of neutrality or objectivity is deployed for a specific purpose: to assert an identity of interest between liberals and the demos. This identity reveals itself once distortions of objective reality are cleared away. Speaking at Google’s headquarters in 2007 (as characterized by White), Obama said that “as president he wouldn’t allow ‘special interests’ to dominate public discourse, for instance in debates about health care reform, because his administration would respond with ‘data and facts.’” During the Q&A, Obama offered the following:

You know, one of the things that you learn when you’re traveling and running for president is, the American people at their core are a decent people. There’s . . . common sense there, but it’s not tapped. And mainly people—they’re just misinformed, or they’re too busy, they’re trying to get their kids to school, they’re working, they just don’t have enough information, or they’re not professionals at sorting out the information that’s out there, and so our political process gets skewed. But if you give them good information, their instincts are good and they will make good decisions. And the president has the bully pulpit to give them good information.

. . .  I am a big believer in reason and facts and evidence and feedback—everything that allows you to do what you do, that’s what we should be doing in our government. [Applause.]

I want people in technology, I want innovators and engineers and scientists like yourselves, I want you helping us make policy—based on facts! Based on reason!

Lest my point be misunderstood, it is perhaps appropriate to say that I voted for the man in 2008. And since the example he gives above is that of health care, I should say that I am open to the idea that socialized medicine is a good idea, in principle. Further, the debate about health care really was distorted by special interests. The point I wish to make is not about substantive policy positions, but rather to consider the cognitive style of progressive politics as exemplified by the mutual infatuation of Google and Obama.

Why engage in such an effort now, in the Trump era, when we are faced with such a different set of problems? It is because I believe the appeal of Trump, to fully half the country, was due in significant part to reaction against this peremptory and condescending turn of progressivism.

It is telling that Obama said he would use the president’s bully pulpit not to persuade (the opportunity that “the bully pulpit” has generally been taken to offer), but to “give good information.” We are a people of sound instincts, but “not professionals at sorting out the information that’s out there.” What we need, then, is a professional.

Persuasion is what you try to do if you are engaged in politics. Curating information is what you do if you believe your outlook is one from which dissent can only be due to a failure to properly process the relevant information. This is an anti-political form of politics. If politics is essentially fighting (toward compromise or stalemate, if all goes well, but fighting nonetheless), technocratic rule is essentially helping, as in “the helping professions.” It extends compassion to human beings, based on an awareness of their cognitive limitations and their tendency to act out.


In the technocratic dream of early twentieth-century Progressives such as Woodrow Wilson, politics was to be overcome through facts and science, clearing the way for rule by experts. Daniel Bell famously named this hoped-for denouement “the end of ideology.” It was a project that acquired a moral mandate as recoil from the ideologically driven cataclysm of World War II. From today’s perspective, it is striking that twentieth-century Progressives seemed not very conflicted about their cognitive elitism.8 Wilson’s project to transfer power from the legislature to administrative bodies entailed a deliberate transfer of power to “the knowledge class,” as Hamburger writes, or “those persons whose identity or sense of self-worth centers on their knowledge.” This includes “all who are more attached to the authority of knowledge than to the authority of local political communities. . . . [T]heir sense of affinity with cosmopolitan knowledge, rather than local connectedness, has been the foundation of their influence and their identity.”

Today’s progressives have a more complex relationship to their own elitism, surely due in part to the legacy of the civil rights movement.9 It takes a certain amount of narrative finesse to maintain a suitably democratic self-understanding while also affirming the role of expertise. This is the predicament of the tech firms, and it is the same one Obama had to manage for himself. In his 2007 remarks at Google, Obama referred to the firm’s origins in a college dorm room, and drew parallels with his own trajectory and aspirations. “What we shared is a belief in changing the world from the bottom up, not the top down; that a bunch of ordinary people can do extraordinary things.” This is the language of a former community organizer. But it is a peculiar sort of “bottom up” that is meant here.

In the Founders Letter that accompanied Google’s 2004 initial public offering, Larry Page and Sergey Brin said their goal is “getting you exactly what you want, even when you aren’t sure what you need.” The perfect search engine would do this “with almost no effort” on the part of the user. In a 2013 update to the Founders Letter, Page said that “the search engine of my dreams provides information without you even having to ask.” Adam J. White glosses these statements: “To say that the perfect search engine is one that minimizes the user’s effort is effectively to say that it minimizes the user’s active input. Google’s aim is to provide perfect search results for what users ‘truly’ want—even if the users themselves don’t yet realize what that is. Put another way, the ultimate aspiration is not to answer the user’s questions but the question Google believes she should have asked.” As Eric Schmidt told the Wall Street Journal, “[O]ne idea is that more and more searches are done on your behalf without you having to type. . . . I actually think most people don’t want Google to answer their questions. They want Google to tell them what they should be doing next.”

The ideal being articulated in Mountain View is that we will integrate Google’s services into our lives so effortlessly, and the guiding presence of this beneficent entity in our lives will be so pervasive and unobtrusive, that the boundary between self and Google will blur. The firm will provide a kind of mental scaffold for us, guiding our intentions by shaping our informational context. This is to take the idea of trusteeship and install it in the infrastructure of thought.

Populism is the rejection of this.

The American founders were well acquainted with the pathetic trajectories of the ancient democracies, which reliably devolved into faction, oligarchic revolution, and tyranny. They designed our constitutional regime to mitigate the worst tendencies of direct democracy by filtering popular passions through political representation, and through nonrepresentative checks on popular will. The more you know of political history, the more impressive the American founding appears.10

Any would-be populist needs to keep this accomplishment in view, as a check on his own attraction to playing the tribune. How much deference is due the demos? I think the decisive question to ask is, what is the intellectual temper of today’s elites? Is it marked by the political sobriety of the founding generation, or an articulated vision of the common good of the nation such as the twentieth-century Progressives offered? Not so much? One can be wary of the demos and still prefer, like William F. Buckley, to be ruled by the first fifty names in the Boston phone book than by one’s fellow intellectuals.

When the internal culture at Google spills out into the headlines, we are offered a glimpse of the moral universe that stands behind the “objective” algorithms. Recall the Googlers’ reaction, which can only be called hysterical, to the internal memo by James Damore. He offered rival explanations, other than sexism, for the relative scarcity of women programmers at the firm (and in tech generally). The memo was written in the language of rational argumentation, and adduced plenty of facts, but the wrong kind. For this to occur within the firm was deeply threatening to its self-understanding as being at once a mere conduit for information and a force for progress. Damore had to be quarantined in the most decisive manner possible. His dissent was viewed not as presenting arguments that must be met, but rather facts that must be morally disqualified.

On one hand, facilitating the free flow of information was Silicon Valley’s original ideal. But on the other hand, the control of information has become indispensable to prosecuting the forward march of history. This, in a nutshell, would seem to be the predicament that the platform firms of Silicon Valley find themselves in. The incoherence of their double mandate accounts for their stumbling, incoherent moves to suppress the kinds of speech that cultural progressives find threatening.11

This conflict is most acute in the United States, where the legal and political tradition protecting free speech is most robust. In Europe, the alliance between social media companies and state actors to root out and punish whatever they deem “hate” (some of which others deem dissent) is currently being formalized. This has become especially urgent ahead of the European Parliament elections scheduled for May 2019, which various EU figures have characterized as the last chance to quarantine the populist threat. Mounir Mahjoubi, France’s secretary of state for digital affairs, explained in February 2019 that, by the time of the election, “it will be possible to formally file a complaint online for hateful content.”12 In particular, Twitter and Facebook have agreed to immediately transmit the IP addresses of those denounced for such behavior to a special cell of the French police “so that the individual may be rapidly identified, rapidly prosecuted and sentenced.” He did not explain how “hateful content” is to be defined, or who gets to do it. But it is reported that Facebook has a private army of fifteen to twenty thousand for this task.

Speaking at MIT six months after the Damore memo episode, in February 2018, former president Obama addressed head-on the problem of deliberative democracy in an internet culture. He made the familiar point about a “balkanization of our public conversation” and an attendant fragmenting of the nation, accelerated by the internet. “Essentially we now have entirely different realities that are being created, with not just different opinions but now different facts—different sources, different people who are considered authoritative.” As Obama noted, “it is very difficult to figure out how democracy works over the long term in those circumstances.” We need “a common baseline of facts and information.” He urged his tech audience to consider “what are the . . . algorithms, the mechanisms whereby we can create more of a common conversation.”


Obama’s description of the problem seems to me apt. Our political strife has become thoroughly epistemic in nature. But the fix he recommends—using algorithms to “create more of a common conversation”—is guaranteed to further inflame the sense any dissident-minded person has that the information ecosystem is “rigged,” to use one of President Trump’s favorite words. If we are going to disqualify voices in a way that is not explainable, and instead demand trust in the priestly tenders of the algorithms, what we will get is a politics of anticlericalism, as in the French Revolution. That was not a happy time.

Among those ensconced in powerful institutions, the view seems to be that the breakdown of trust in establishment voices is caused by the proliferation of unauthorized voices on the internet. But the causal arrow surely goes the other way as well: our highly fragmented casting-about for alternative narratives that can make better sense of the world as we experience it is a response to the felt lack of fit between experience and what we are offered by the official organs, and a corollary lack of trust in them. For progressives to now seek to police discourse from behind an algorithm is to double down on the political epistemology that has gotten us to this point. The algorithm’s role is to preserve the appearance of liberal proceduralism, that austerely fair-minded ideal, the spirit of which is long dead.

Such a project reveals a lack of confidence in one’s arguments—or a conviction about the impotence of argument in politics, due to the irrationality of one’s opponents. In that case we have a simple contest for power, to be won and held onto by whatever means necessary.

This article originally appeared in American Affairs Volume III, Number 2 (Summer 2019): 73–94.


Notes

1   I am thinking of the project to transform society in the image of an imagined “free market” that would finally bring to fruition what nature prescribes (often destroying customary practices and allegiances along the way), or stories of a novus ordo seclorum that will blossom if only we are courageous enough to sweep away the impediments to “freedom” (man’s natural estate)—by aerial bombardment, if necessary. Both enthusiasms tend toward lawlessness. Both seek a world that is easily conjured in the idiom of freedom-talk, but in practice require more thorough submission to mega-bureaucracies, whether of corporations or of an occupation force.

 

2   In his 2018 book The Revolt of the Public, Martin Gurri diagnoses the eruption of protest movements around the world in 2011—Occupy Wall Street, the indignados in Spain, and the violent street protests in London, to name a few—as a politics of pure negation, driven more by the romance of denunciation than by any positive program. These protests expressed distrust of institutional voices, and a wholesale collapse of social authority. On left and right alike, people feel the system is rigged, and indeed political leaders themselves have stoked this conviction, insisting that elections lost by their side were illegitimate, whether because of “voter suppression” or a phantom epidemic of voting by illegal immigrants. This is dangerous stuff.

3   It is therefore perhaps appropriate to consider Europe’s experiment in transferring political sovereignty to a technocratic, democratically unaccountable governing body (the European Union) for clues as to what sort of political reactions such a project might engender.

4   Richard Rorty celebrated the power of “redescription” to alter our moral outlook. He had in mind the genuinely liberal effect that literature sometimes has in shifting our gaze, for example the effect that reading Dickens or Uncle Tom’s Cabin had in enlarging the sympathies of people in the nineteenth century. But redescriptions can just as easily have an impoverishing effect on how we view ourselves and the world. As Iris Murdoch wrote, man is the animal who makes pictures of himself, and then comes to resemble the pictures.

5   On campus, something like this is evident in the reduction of the entire miasma of teenage sexual incompetence, with its misplaced hopes and callow cruelties, to the legal concept of consent. Under the influence of this reduction, a young person is left with no other vocabulary for articulating her unhappiness and confusion over a sexual encounter. She must not have really consented.

6   In a related materialist vein, Reihan Salam ties “wokeness” to the political economy of precarity. Wokeness is a competitive status game played by aspirants to cultural capital. As one friend put it to me, “my PC status advancement seems to depend on my having a Perry Mason moment, in which I reveal that the defendant’s superficially unobjectionable speech actually hides a subtext of oppression only apparent to me.”

7   Whenever there is a whiff of regulatory concern about possible self-dealing by Google as it expands into services far afield from search, it asserts that if its search results presented anything but the  disinterested, best possible match to what “customers” were looking for, they would go elsewhere. This incantation of the free market syllogism works like a magic spell in arresting criticism. But how does the logic of the market apply to a near-monopoly? Or to a firm that provides a free service? The reality, of course, is that the users of Google are not the customers. Customers are those who pay for a product. Advertisers gave Google $95 billion in 2017 for access to the product. The product consists of predictions about users’ susceptibility to a specific pitch.

8   Another difference stands out. The early Progressive program had avowedly nationalist elements (consider William James’s famous essay “The Moral Equivalent of War” promoting a program of universal national service and an ethic of hardness, to be achieved through manual labor). It sought to forge an American identity based on solidarity, whereas for today’s progressives the idea of a common good, defined as American, is more problematic. A version of it underlies the economic populism of Bernie Sanders and Elizabeth Warren, but the idea of a specifically American common good is also subject to constant challenge on the left, both by immigration maximalists and by intersectional entrepreneurs who must direct their claims against what is common. Relatedly, the perspective of today’s progressivism is global rather than national, and this sits easier with the “shareholder cosmopolitanism” of capital.

9   Woodrow Wilson complained that “the bulk of mankind is rigidly unphilosophical, and nowadays the bulk of mankind votes.” The reformer is bewildered by the need to influence “the mind, not of Americans of the older stocks only, but also of Irishmen, of Germans, of Negroes.” Hamburger argues that it was the expansion of voting rights early in the twentieth century that prompted Wilson to want to shift power from representative bodies to executive agencies.

10  Barbara Tuchman wrote, “Not before or since has so much careful and reasonable thinking been invested in the formation of a government system. . . .  [T]he Founders remain a phenomenon to keep in mind to encourage our estimate of human possibilities, even if their example is too rare to be a basis of normal expectations” (The March of Folly: From Troy to Vietnam, 1984).

11  Silicon Valley’s original intellectual affiliations lie deep in the California counterculture: the Human Potential Movement, Esalen, the Whole Earth Catalog, and all that. Like the generation of ’68 in its long march through the institutions, the Valley staked its early identity on an emancipatory mission, and it has had a hard time adjusting its self-image to reflect its sheer power. In the same vein, recall Obama striking the pose of community organizer in his second term, speaking truth to power from Air Force One.

12  “Mounir Mahjoubi : ‘Dans quelques mois, il sera possible de porter plainte sur internet pour contenus haineux,’” Fdesouche.com, February 20, 2019.

Thursday, March 10, 2016

How the Modern World Has Privatized Silence


Ex: http://www.oragesdacier.info

Modern technologies make ever greater demands on our attention, and everyone seems delighted about it. Yet this exhausts our capacity to think and to act, argues the philosopher-mechanic Matthew B. Crawford.

“All of humanity’s problems stem from man’s inability to sit quietly in a room alone,” Pascal wrote in his own time. But what would the author of the Pensées say today, faced with our poor minds oversaturated with technological stimuli, confronted with an explosion of choices, and for whom preserving a minimum of concentration has become an exhausting daily challenge? It is this crisis of attention that another philosopher, a contemporary this time, has set out to dissect.

Matthew B. Crawford is an American, a researcher in philosophy at the University of Virginia. He is also, unusually, a motorcycle repairman. From this career as a “philosopher-mechanic” he drew a first book, Eloge du carburateur. Essai sur le sens et la valeur du travail (the French edition of Shop Class as Soulcraft), a bestseller in the United States. In it he recounts how, as director of a Washington think tank where he was asked to summarize twenty-three very long articles a day (“an absurd and impossible target, the idea being that you have to write without understanding, because understanding takes too much time...”), he slammed the door and opened a motorcycle repair shop. In this plea for manual work, he celebrates the greatness of “making,” which educates us and puts us in direct contact with the world through material objects.

“Our public space is colonized by technologies designed to capture our attention.”

It was while promoting that bestseller that Crawford was struck by what he calls “a new frontier of capitalism.” “I spent a great deal of my time traveling, in airport waiting areas, and I was struck by how far our public space has been colonized by technologies designed to capture our attention. In airports there are advertising screens everywhere, and loudspeakers blare music nonstop. Even the gray trays in which travelers have to put their carry-on luggage to go through the X-ray scanners are now covered with advertisements...”
 
The business-class traveler has a way out: he can take refuge in the private lounges reserved for him. “There, silence is offered for enjoyment as a luxury product. In the business lounge at Charles-de-Gaulle there is no television and no advertising on the walls, while the usual cacophony reigns in the rest of the airport. A terrifying image came to me of a world divided in two: on one side, those who have a right to silence and concentration, who create and enjoy recognition for their work; on the other, those condemned to noise, who endure, without realizing it, the advertising creations dreamed up by the very people who benefited from the silence... Much has been said about the decline of the middle class over the past few decades; the growing concentration of wealth in the hands of an ever more exclusive elite surely has something to do with our tolerance for the ever more aggressive exploitation of our collective attentional resources.”
 
“Self-regulation is like a muscle: it tires easily.”
 
In short, the world is going the way of the airport: we have allowed our attention to be turned into a commodity, into "available human brain time," to borrow the phrase of Patrick Le Lay, former CEO of TF1, and we now have to pay to get it back. We can, of course, fight with iron self-discipline against the mental fragmentation caused by "multitasking": resisting, for instance, the urge to check our inbox or our Instagram feed for the umpteenth time while listening to music on Spotify and writing this article... "But self-regulation is like a muscle," Crawford warns. "And that muscle tires easily. It cannot be called upon constantly. Self-discipline, like attention, is a resource we possess only in finite quantity. That is why so many of us feel mentally exhausted."
 
Illustrations: Tom Haugomat for Télérama
This sounds like the classic critique of modern enslavement by technology allied with market logic. Except that Matthew Crawford chooses another, far more provocative reading. The exhaustion produced by modern flitting from one thing to the next, he explains, is not merely the result of technology. It reflects a crisis of values rooted in our very identity as modern individuals, and in the noblest, most reasonable aspirations of the Enlightenment. The blame lies with Descartes, Locke, and Kant, who wanted to make us autonomous subjects capable of freeing ourselves from the authority of others: we had to be liberated from the manipulations of kings and priests. "They theorized the human person as an isolated entity," Crawford explains, "wholly independent of the surrounding world, and aspiring to a form of radical individual responsibility."
 
It was, the philosopher concedes in his own (equally radical) rereading of the Enlightenment, a necessary step in throwing off the shackles imposed by authorities which, as Kant put it, kept human beings in a state of "minority." But times have changed. "The current cause of our malaise is the set of illusions engendered by an emancipation project that ended up degenerating: precisely that of the Enlightenment." Obsessed with this ideal of autonomy, which we have placed at the heart of our political, economic, and technological lives, we have gone too far. We now find ourselves chained to our own will to emancipation.

“This multiplication of choices captures ever more of our energy and attention...”
 
"We often think that freedom amounts to the capacity to make choices; maximizing that freedom would therefore require maximizing ever further the number of options available to us," Crawford explains. "Whereas it is precisely this multiplication that captures ever more of our energy and attention..." It is a perverse process from which we suffer as much as we enjoy it, as consenting victims. By letting ourselves be distracted by our smartphones we exhaust ourselves mentally, while affirming our pleasure in being free and autonomous in all circumstances. Checking one's email while queuing at the cinema, waiting at a red light, or chatting with a neighbour is a way of proclaiming one's all-powerful freedom in the face of the obligation to wait. It is being the "designer" of one's own world, as the forces of marketing never tire of repeating.
 
And it means locking ourselves, the philosopher charges, into the autarkic ideal of an "unattached self acting in complete freedom," rationally and radically responsible for its own fate. In a sense, perhaps we are all becoming autistic, seeking to build ourselves an individual bubble in which it would finally be possible to collect ourselves... Of course, making Descartes and Kant solely responsible for this capture of attention is quite a stretch. But it is also a way of writing philosophy "in a genuinely political mode," Crawford insists, "that is, a polemical one, as the Enlightenment thinkers I criticize did, in response to some malaise acutely felt at a given historical moment." In doing so, the philosopher offers an alternative vision, and even a few therapeutic keys, for regaining control over our distracted minds. There is no question, for him, of throwing away tablets and smartphones (that would be an illusion), nor of relying solely on work "on oneself."
 
"The combined effect of these efforts at emancipation and deregulation, by parties of the left and of the right alike, has been to increase the burden on the individual, who is now expected to regulate himself," he observes. "Just glance at the 'self-help' shelf of any bookshop: the central character of the great contemporary narrative is a being subject to the imperative of choosing what he wants to be and of carrying out that transformation through sheer will. Except that the contemporary individual apparently is not doing very well on that front, judging by indicators such as rates of obesity, debt, divorce, and addiction, including technological addiction..."
 
Matthew Crawford, good motorcycle mechanic that he is, prefers to call on us to get our hands dirty again. In other words, to "invest ourselves in an activity that structures our attention and forces us to 'step outside' ourselves. Manual or craft work, for example, learning a musical instrument or a foreign language, or surfing [editor's note: Crawford is also a surfer] constrain us through the concentration these activities demand and through their internal rules. They confront us with the obstacles and frustrations of the real. They remind us that we are 'situated' beings, constituted by our environment, and that this is precisely what allows us to act and to flourish." In short, the point is to put in place an "ecology of attention" that lets us go out to meet the world as it is and become attentive again to ourselves and to others: a genuine antidote to narcissism and to autism.
 
“The modern world is privatizing the silence that makes attention and concentration possible.”
 
Is this also a call to bring more Zen or "mindfulness" into our lives, as another motorcycle-repairing author, the American Robert Pirsig, did in his cult novel Zen and the Art of Motorcycle Maintenance? No, Crawford retorts, because the stakes are not merely individual; they are fundamentally political. "Attention, of course, is the most personal thing there is: normally we are responsible for our own capacity for concentration, and it is we who choose what we wish to pay attention to. But attention is also a resource, like the air we breathe or the water we drink. Their general availability is the foundation of all our activities. Likewise silence, which makes attention and concentration possible, is what allows us to think. And the modern world is privatizing this resource, or confiscating it." The solution? Make attention, and silence, common goods. And claim the right "not to be addressed"...
 
Matthew Crawford
Researcher in philosophy at the University of Virginia and motorcycle mechanic
2010: Eloge du carburateur. Essai sur le sens et la valeur du travail (Éditions La Découverte)