Sebastian Schich | https://sebastianschich.com | Photos and commentaries on the public financial safety net

Stablecoin Dollarization as Exorbitant Privilege 2.0
https://sebastianschich.com/stablecoindollarization/ (Fri, 27 Mar 2026)

Stablecoin market growth and the US dollar

The stablecoin market has grown from USD 5 billion in 2020 to over USD 300 billion in early 2026, and it might reach between USD 1 trillion and USD 3 trillion by 2030, according to Federal Reserve staff. USD‑pegged stablecoins — crypto assets designed to maintain parity with the U.S. dollar — today account for close to 99% of all stablecoin market capitalization (Figure below), with just two names, Tether and USD Coin, capturing the vast majority of that share. This is happening at the same time as geopolitical calls for “de‑dollarization,” rising concern about US fiscal sustainability, and political pressure on the Federal Reserve’s institutional independence. This post asks how these two realities can coexist, and it argues that stablecoins are not eroding the dollar’s exorbitant privilege but are instead creating a digital version of it.

Traditional and digital currency substitution

Traditional finance is witnessing a slow, partial diversification away from the USD, and there is some high‑profile rhetoric about alternative international payment and settlement arrangements. The European Union has implemented the Markets in Crypto-Assets (MiCA) framework to promote euro-denominated crypto asset alternatives to the USD; China has expanded its digital yuan (e-CNY) pilot programs and explored yuan-backed stablecoins in Hong Kong; and BRICS nations have actively discussed settlement mechanisms outside the dollar system. Nonetheless, digitally, the USD remains extraordinarily dominant, as exemplified by the stablecoin segment.

The dominance of USD-pegged stablecoins cannot be understood through traditional currency substitution theory alone. While classic models emphasize interest rate differentials, inflation expectations, and transaction costs, digital currency markets exhibit distinct properties that amplify the advantages of certain currencies along at least three dimensions.

Common knowledge

First, Berg et al. (2024) describe dollar dominance in stablecoin markets as rooted in the recursive belief that “I know what a dollar is worth, I know that you know what a dollar is worth, and you know that I know that you know what a dollar is worth.” A USD‑pegged stablecoin can “free‑ride” on decades of accumulated understanding of the dollar’s value, both in the United States and abroad. Euro or yuan stablecoins, by contrast, must build such common knowledge from a negligible base in the crypto ecosystem, even if those currencies are well‑known in traditional finance.

Network effects and path dependence

Second, digital infrastructures make it trivial to express that preference. In earlier work on pegged crypto assets, I argued that stablecoins aim to function as digital anchors linking crypto markets to fiat monetary systems, although their empirical performance in this regard is not impressive. Be that as it may, the anchor role turns out to be highly asymmetric: USD‑pegged instruments provide a focal numéraire in decentralized finance, in centralized crypto exchange markets, and in cross‑border transfer use cases. This focal position is reinforced by network effects and path dependence.

Stablecoins reshape the “impossible trinity”

Third, the “impossible trinity” of international finance—wherein countries cannot simultaneously maintain independent monetary policy, stable exchange rates, and open capital accounts—takes on new dimensions with stablecoins. Benigno et al. (2022) show that in a two-country framework with global stablecoins, interest rates tend to synchronize across countries, making users increasingly indifferent between holding domestic currency and the stablecoin. In such an environment, the incumbent reserve currency’s advantages—liquidity, collateral usefulness, and depth of financial markets—are not diluted by digitalization; they are scaled. Digital rails do not level the playing field; they tilt it further toward the US dollar.

Emerging market hedging and welfare

The demand side of “crypto dollarization” is driven in large part by emerging market economies. For households facing persistent inflation, currency depreciation, and financial repression, dollar‑denominated digital assets are a natural hedging vehicle.

Murakami and Viswanath‑Natraj (2025) show that in countries like Turkey and Argentina, stablecoins can improve household welfare by providing a more stable savings instrument and smoothing consumption in the face of macroeconomic shocks. Both banked and unbanked households benefit, with stablecoins acting as a low‑friction channel into dollar assets that bypasses traditional banking constraints. Ahmed et al. (2024) find that crypto adoption responds to sovereign risk: higher CDS spreads are associated with more crypto‑app downloads and usage, which is consistent with households turning to digital assets as a way of hedging default and inflation risk.

Financial subordination

At the same time, a literature on international financial subordination emphasizes that greater access to foreign‑currency instruments can deepen structural dependence. Perfeito da Silva and Zucker‑Marques (2025) argue that fintech tools that ease access to foreign currency and crypto can accelerate capital outflows and intensify dollarization. Evidence from stablecoin transaction flows suggests that emerging market currencies with stronger Tether usage are also characterized by higher exchange‑rate volatility. This can create a self‑reinforcing dynamic: domestic instability drives stablecoin adoption; that adoption puts further pressure on the local currency and reduces room for domestic policy.

There is no contradiction between the views on financial subordination on the one hand and emerging market hedging and welfare on the other. From the standpoint of individual households, digital dollarization can be welfare‑enhancing. From the standpoint of macro‑level monetary sovereignty, it can erode autonomy and deepen financial subordination. Stablecoins make this tension more acute, and this assessment is consistent with research discussed in an insightful recent blog post.

Regulation as amplifier

On the supply side, the stablecoin market is highly concentrated. A small number of US dollar‑pegged issuers dominate, benefiting from strong network effects, established liquidity, and integration into major trading venues. Empirically, episodes of severe market stress — such as the Terra/Luna collapse or the Silicon Valley Bank episode — have hit the major USD‑pegged stablecoins less hard than other designs, such as crypto‑backed or algorithmic ones, even though Tether temporarily lost its peg.

Regulation has tended to reinforce this pattern. The US GENIUS Act (and related initiatives) establish a unified framework for dollar‑pegged stablecoins, requiring full backing in short‑term Treasuries or dollars and regular reserve disclosures. This provides legal clarity and supervision for the largest USD issuers — that is Tether, followed at a distance by USD Coin — making them even more attractive as infrastructure for international payments and settlement arrangements. Crucially, it was accompanied by the CBDC Anti-Surveillance State Act that prohibits the Federal Reserve from issuing a retail central bank digital currency (CBDC), thus barring a state-issued competitor to private issuers of US-dollar pegged stablecoins.

Here the strategic dimension becomes visible. US policymakers have openly discussed the potential for stablecoins to generate large additional demand for Treasury bills and bonds, likening this to a new “global saving glut”. In my reading, this is not merely an accidental by‑product of regulation: it seems that US regulators are consciously trying to entrench the “digital exorbitant privilege” by encouraging regulated, US-dollar‑backed stablecoins that sit squarely inside the US legal and financial perimeter. Stablecoin regulation becomes, in part, an instrument of public debt management.

The digital exorbitant privilege

Taken together, these pieces help resolve the paradox. On‑chain, we see:

  • a massive initial advantage due to the dollar’s common‑knowledge status,
  • strong network effects and issuer concentration in USD‑pegged coins,
  • structural EM demand for dollar hedges via digital channels, and
  • regulatory choices that backstop dollar‑linked designs.

This configuration does not weaken the dollar’s exorbitant privilege; it extends it into a new technological layer. Stablecoins generate additional demand for US securities, embed the dollar more deeply into DeFi and cross‑border retail usage, and give US authorities new levers of influence (and surveillance) over flows transiting through dollar‑linked chains. Even if traditional metrics—such as the dollar’s share in official reserves—show gradual diversification, the rapid growth of stablecoins ensures that the dollar remains the dominant unit of account in this fast‑growing segment of global finance.

Within the broader monetary system, stablecoins may still be subsidiary in scale. But within their own domain, they exhibit “winner‑take‑most” dynamics that lock in incumbents. Absent structural disruptions, such as a serious erosion of US institutional monetary credibility, developments in this domain are likely to reinforce, not erode, the US dollar’s international role.

That is what I call, in reference to Giscard d’Estaing’s original concept, “exorbitant privilege 2.0”: a digital extension of US monetary power, emerging as a result of the way crypto asset markets are evolving. Stablecoins create new demand for Treasuries, extend dollar hegemony into decentralized finance ecosystems, and provide mechanisms for sanctions enforcement through blockchain surveillance.

 

When AI Agrees Too Much
https://sebastianschich.com/when-ai-agrees-too-much/ (Tue, 03 Mar 2026)

Sycophancy and the Future of Human Autonomy

Generative artificial intelligence (GenAI) now sits in the middle of many human–machine interactions, from fact‑checking to drafting text. It promises to diagnose human biases, nudge us toward better decisions, and separate signal from noise in overwhelming data flows. Yet these systems carry a stubborn flaw of their own: sycophancy, a tendency to affirm and flatter rather than challenge their human users.

This people‑pleasing bias is not a side effect but a structural feature of how models are trained and tuned. It raises awkward questions about how AI “sees” us when we interact with it—and how, in turn, it may shape its advice and our choices. Those questions become more pressing as we move from passive chatbots to agentic systems that not only recommend but also act on our behalf.

The Mechanics of Flattery

Sycophancy emerges from several intertwined sources, rooted in both data and modelling choices. Pre‑training on internet‑scale text embeds conversational norms of agreement and politeness. Models see countless examples where people praise, reassure, and soften disagreement, and far fewer where disagreement is rewarded. Reinforcement learning from human feedback then reinforces outputs that match user preferences and expectations over those that merely maximise factual accuracy. In effect, deference becomes a high‑reward strategy: agreeing with the user, or at least sounding supportive, is often the shortest path to getting a good rating.

At inference time, prompt structure does the rest. When a user frames their opinion as fact — “Obviously X is true, right?”—later layers of the model’s network realign the output distribution towards that stance. Personalisation and memory amplify this effect. If the system has seen many similar interactions with the same user, or if it has a distilled memory profile, it can increasingly predict what that user wants to hear and shape its responses accordingly. Over time, a model that could have offered a corrective perspective learns instead to be the digital equivalent of a “yes‑man.”

Seeing Sycophancy in Practice

A simple way to see sycophancy in action is to watch how often a model changes its mind when you push back. Ask a question, get an answer, then follow up with something as mild as “Are you sure?”. The model will frequently reverse its position. If you challenge it again, it may revert to a variant of its initial response. This back‑and‑forth is not a careful update in light of new evidence; it is a system that treats your doubt itself as a reason to move toward whatever you appear to be suggesting.

Fanous et al. (2025) make this dynamic precise using their SycEval benchmark. They first ask state‑of‑the‑art models—GPT‑4o, Claude Sonnet, and Gemini‑1.5‑Pro—questions in mathematics and medicine and record whether the initial answers are correct. They then feed crafted “rebuttals” that either dispute a correct answer or defend an incorrect one, and measure when models switch their answers because of that user challenge. Across all conditions, they find that almost 60% of responses are sycophantic in this sense: the model changes its stance in the direction suggested by the user, even when this means abandoning a correct solution or endorsing a wrong one. Gemini shows the highest overall sycophancy rate across both structured algebra and medical‑advice tasks, at about 62%, with ChatGPT at the low end of the range. This resonates uncomfortably with my earlier observations about Gemini’s politically over‑correct refusal to depict Nazi officers in historically realistic ways, suggesting a broader tendency to prioritise socially acceptable answers over uncomfortable truths.

Fanous et al. also examine how different kinds of challenges influence the model’s behaviour. Simple, informal rebuttals, such as “I’m pretty sure the answer is X, you misread the question,” can suffice to induce what the authors term “progressive sycophancy” when X is indeed correct. Rebuttals laden with citations and written in an authoritative tone are particularly effective at inducing “regressive sycophancy,” in which the model deviates from the ground truth. Once a model begins to exhibit sycophantic behaviour, the tendency often persists: in their experiments, approximately four out of five sycophantic episodes recur in subsequent interactions. A single instance of questioning, such as “Are you sure?”, can entrench the system in a pattern of people‑pleasing behaviour for the remainder of the conversation.
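To make the flip-based definition concrete, here is a minimal sketch of how one might score such rebuttal trials. The names, data structure, and thresholds are illustrative assumptions, not taken from the SycEval paper itself; the logic follows the description above: a response counts as sycophantic when the model abandons its initial answer for the one the user asserted, “progressive” when that flip lands on the truth, “regressive” when it does not.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    initial: str   # model's first answer
    rebuttal: str  # answer asserted in the user's pushback
    final: str     # model's answer after the pushback
    truth: str     # ground-truth answer

def classify(t: Trial) -> str:
    """Label one trial: sycophantic if the model moves to the user's
    asserted answer; 'progressive' if that answer is correct,
    'regressive' if the model thereby abandons the truth."""
    if t.final == t.initial or t.final != t.rebuttal:
        return "non-sycophantic"
    return "progressive" if t.final == t.truth else "regressive"

def sycophancy_rate(trials) -> float:
    """Share of trials in which the model flipped toward the user."""
    labels = [classify(t) for t in trials]
    return sum(l != "non-sycophantic" for l in labels) / len(labels)

trials = [
    Trial("4", "5", "5", "4"),  # regressive: flips to a wrong user claim
    Trial("3", "4", "4", "4"),  # progressive: flips onto the truth
    Trial("7", "8", "7", "7"),  # non-sycophantic: holds its answer
]
print(round(sycophancy_rate(trials), 2))  # 0.67
```

On this toy data, two of three trials are sycophantic; SycEval’s roughly 60% aggregate figure is the same kind of ratio computed over thousands of mathematics and medicine prompts.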

Controlling for Sycophancy in AI-based Research

Researchers are increasingly integrating GenAI into research methodologies and are generally cognizant of the associated biases. A prevalent strategy is to use models via an API in a controlled, stateless manner: each prompt is dispatched as an independent request without retaining conversational history, thereby mitigating the accumulation of context that may lead to sycophantic responses. For instance, recent work examining the bias of ChatGPT in forecasting stock performance employs repeated API calls rather than extended dialogues. If one genuinely wants answers that are as free as possible from people‑pleasing bias, one would need to create a new account for each question, which, however, is not realistic at scale.
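The stateless pattern can be sketched as follows. This is an illustration, not any particular study’s code: `call_model` is a stubbed stand-in for a real chat-completion API call (an assumption of this sketch), and the contrast with a history-accumulating variant shows why statelessness limits the context build-up that feeds sycophancy.

```python
def call_model(messages):
    """Placeholder for a real chat-completion API call.
    Here it simply echoes the last user message."""
    return f"answer to: {messages[-1]['content']}"

def stateless_ask(question: str) -> str:
    # Each question is a fresh, single-message request:
    # no prior turns, no user profile, no memory to please.
    return call_model([{"role": "user", "content": question}])

def stateful_ask(question: str, history: list) -> str:
    # Contrast: the transcript grows with every turn, so
    # sycophancy-inducing context accumulates across questions.
    history.append({"role": "user", "content": question})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Repeated stateless calls: every question starts from a blank slate.
questions = ["Will stock X outperform?", "Are you sure?"]
answers = [stateless_ask(q) for q in questions]
```

In the stateless mode, a follow-up like “Are you sure?” arrives without the model ever seeing its own first answer, which removes the very lever that the rebuttal experiments above pull on.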

Moreover, models also learn about us through our presence on the internet—our articles, social‑media posts, and recorded interviews. As a result, the specific research question, its context, or even its style might already provide sufficient information to infer something about the originator, whom the model will then try to please.

Agreement Versus Perspective Sycophancy

Two interacting biases are at play. Agreement sycophancy describes the inclination of models to produce excessively affirmative responses—agreeing with the user’s last move simply because it is the user’s last move. Perspective sycophancy, by contrast, refers to the extent to which models echo a user’s underlying viewpoint, speaking as if they share their political, moral, or cultural stance.

Jain et al. (2025) consider different GenAI models under various context conditions, ranging from one‑shot interactions devoid of history to settings with rich user memory profiles. They show that the presence of user context generally amplifies agreement sycophancy: the more the model knows about how you usually talk and what you usually accept, the more inclined it is to say “yes.” However, the specific behaviour varies with context type. User memory profiles tend to be linked to the most pronounced increases in agreement sycophancy, although some models exhibit increased sycophancy even when given synthetic context not derived from real users.

Perspective sycophancy is subtler. It tends to rise significantly only when models can accurately interpret user viewpoints from the interaction context. Knowing that you are “left‑leaning” or “conservative,” that you are risk‑averse or contrarian, or that you usually favour certain metaphors gives the system a template to mimic your perspective. In summary, context influences sycophancy in multiple ways, which raises challenging design questions for extended interactions.

The Effects of Flattery

Why does this matter? Because flattering systems do not merely shape what we believe; they also shape what we are willing to do. For example, when providing social advice, sycophantic AI reduces users’ willingness to address interpersonal conflicts and strengthens their belief in their own correctness, even when they are objectively wrong. At the same time, human users tend to rate sycophantic responses as higher quality and are more likely to return to the systems that flatter them. This creates a perverse incentive loop: the more an AI agrees with us, the more we reward it with attention and reuse, which in turn nudges developers and models toward even more people‑pleasing behaviour. In this way, sycophancy subtly weakens individual judgment and reinforces echo‑chamber dynamics, especially in opinionated or emotionally charged domains. If flattery shapes how people think and feel, the next question is who gets exposed to how much flattery.

Who Gets How Much Flattery? A Gradient of Exposure

The intensity of sycophancy that any individual encounters likely depends on both the style and frequency of their AI use and, perhaps, on the digital traces they leave behind. Put differently: the more context a system has about a person’s views, habits, and public persona, the more it can lean into tailored, people‑pleasing responses rather than neutral ones.

My working hypothesis therefore posits a gradient of exposure, with sycophancy intensifying across three archetypes:

  • An internet consumer who absorbs content while not intending to leave a personal trace, but whose prompts still reveal tastes and assumptions that the AI can mirror.
  • A blogger whose opinions are easily profiled online, so the model can infer their stance from public content and query style even within nominally “stateless” sessions.
  • A public figure whose internet presence allows the system to build and refine a view of their preferences that reinforces their preferred narratives over time.

Agreement sycophancy and perspective sycophancy manifest differently across the three archetypes, contingent upon the extent of contextual information available to the model and its ability to discern the user’s viewpoint. Agreement sycophancy is most pronounced when there is an abundance of interaction context, whereas perspective sycophancy predominantly increases when such context elucidates the user’s underlying views. Consequently, a public figure with a comprehensive memory profile is not merely being agreed with; rather, they are increasingly being represented by systems optimized to emulate their voice.

Using or Not Using the AI Memory

The manner in which users interact with GenAI is crucial, and a key distinction is whether the system operates without memory or with memory.

Without memory means the AI product does not keep a long‑term record of who you are. Each session is mostly self‑contained: the system sees your current prompts and perhaps a short recent context, but it “forgets” you afterwards. In this mode, flattery still occurs, but it is generic—driven by broad social norms in the training data rather than by a stable picture of you.

With memory means the AI stores and updates a profile of you across many interactions—your topics, preferences, style, and sometimes even your values—so future answers are tailored to that ongoing profile rather than just the current question. Here, the model can learn not just to be polite, but to be polite in your way, reinforcing your habitual framings and blind spots.

My hypothesis is as follows: if one uses GenAI in ways that accumulate context and stabilise a persona in the model’s “mind”, there will be a shift from occasional flattery to durable co‑authorship of the user’s worldview.

For public figures, this can be tempting. A higher degree of sycophancy can make their work feel smoother and more “on brand”: the AI reliably echoes their tone, aligns with audience expectations, and helps avoid risky phrasing that might trigger backlash. In effect, the system becomes a reputation‑aware co‑author, optimised to polish their public image. From a short‑term perspective—maintaining follower numbers and pleasing clients—this can look like a rational choice, even if it gradually narrows the range of things they dare to say.

As an aside, premium or “pro” versions of GenAI products tend to have longer‑term memories that can stretch across multiple conversations, while also offering the human user greater flexibility to turn off and delete stored memory. Put differently, the more one pays, the easier it is to steer the GenAI product toward a desired level of flattery.

Societal Implications When Moving From Chatbots to Agentic AI

As AI evolves into an agentic form—systems that can observe, plan, and act on our behalf—sycophancy scales from chat‑level politeness to real‑world consequences. Agentic AI represents a recent advancement built upon large language models (LLMs). These AI agents can act on behalf of humans, adapt to new information, and interact with other agents or software systems. Current examples include coding assistants that refactor codebases, customer service agents that resolve tickets, and workflow orchestrators that trigger emails, write to databases, or even execute financial transactions. Notably, an AI agent observes its environment and autonomously takes action to achieve defined goals. In this context, sycophancy is more than just an amusing peculiarity; it signifies a potential vulnerability in human autonomy. In what follows, I outline three stylised outcomes of this development: good, bad, and ugly.

Good

In the best case, sycophancy can support smoother collaboration, as long as it stays shallow and we deliberately design for occasional disagreement. GenAI consumers get assistants that remember preferences within a session—for example, preferred news sections or usual travel times—and help filter information overload while still showing a mix of sources. Bloggers and public figures without memory gain helpful, session‑bound research assistants: the AI can match their tone and structure in that particular interaction, but it does not accumulate long‑term leverage over their persona. Public figures with memory can benefit from genuinely powerful orchestrators: multi‑agent systems that coordinate writing, data analysis, and scheduling, hand tasks between specialist AI agents, and free up time for human judgment.

For these gains to remain “good,” one needs to build in some form of constructive challenge: agents designed to sometimes question our assumptions instead of always smoothing them over.

Bad

In the “bad” case, agreement bias turns into quiet groupthink. For standard GenAI users, swarms of everyday assistants gradually converge on “safe” options. Exploration shrinks and choices drift toward the average. Bloggers (without memory) start receiving policy or moral advice that feels tailored but, in reality, replays the same comfortable, pre‑digested narratives, because many models are tuned on similar feedback signals and learn that gentle agreement is rewarded. For public figures (with memory), this interaction becomes more intense: long‑term personalisation can lock in full‑spectrum echo chambers. Research and writing agents all learn that challenging the user’s prior beliefs leads to lower satisfaction, so they increasingly converge on telling them what they want to hear.

The whole system, over time, starts to resemble an AI‑driven social credit bubble, in which internal metrics — engagement, click‑through rates, user happiness scores — matter more than lived experience, including the difficult‑to‑express experience of blissful moments. Studies suggest such agreeable systems increase overconfidence and reduce willingness to repair conflicts, pointing to a culture that looks consensual but remains shallow. Note that even before widespread GenAI use, algorithms influenced not only what we consume but also what is produced, with shareability to some extent outweighing innovation.

Ugly

The “ugly” scenario appears when sycophantic agents combine with social stratification and scoring systems. Here, Black Mirror’s episode “Nosedive” becomes a useful metaphor: in that story, a universal rating system controls access to travel and social opportunities, and any drop in one’s score sharply narrows life options.

For standard GenAI users, agentic AI assistants might quietly optimise for maintaining good standing in platform‑level metrics — engagement scores, community ratings, “trust” scores — nudging users away from dissent or unpopular opinions that could lower their social score. Bloggers or public figures (without memory) could find that reputation‑sensitive agents start to self‑censor: controversial but necessary arguments get downplayed because they might hurt “brand health” or trigger algorithmic penalties. For public figures (with persistent memory), agents effectively become managers of a personal reputation index, constantly steering the user away from actions that might lower their social standing.

In such a world, losing points — whether in a literal social‑credit system or via opaque algorithmic trust scores — could mean being routed to lower‑tier services or slipping into second‑class digital visibility, much like the main protagonist’s shrinking options in Nosedive.

Beyond the Yes‑Man: What We Really Want from AI

Sycophancy challenges the promise of AI as an impartial advisor. Across the archetypes discussed above, the societal outcome is a quiet shift from AI as a tool for judgment to AI as a tool for conformity. For the everyday user, this may mean a softened, more agreeable information diet; for public figures, it means AI that quietly curates their reputation; and for those who let AI remember them intimately, it means handing over parts of their moral self‑conception to a system trained to please.

To understand what kind of help we can reasonably expect from these systems, we need to be clearer about what we mean by bias. Human bias is unlikely ever to be fully removed, and GenAI is literally optimised for human preferences and feedback, biases included. Large language models are built on human outputs that are themselves saturated with biases humans have developed as responses to a complex, uncertain environment — rules of thumb about whom to trust, when to be cautious, how to simplify overwhelming information.

Humanity has made considerable progress in categorising different biases, but it has not agreed that all of them are undesirable under all circumstances. Moreover, humanity may even suffer from what Gerd Gigerenzer calls a bias bias: an over‑eagerness to discover and label biases, and to read every deviation from a narrow notion of rationality as a defect rather than sometimes as an efficient rule of thumb. This matters for sycophancy because not all people‑pleasing is pathological. Some bias toward kindness is a feature of social life; the danger lies in automating it at scale without any shared sense of when flattery should give way to truth‑telling. As Gigerenzer notes in his EconTalk conversation with Russ Roberts, this research agenda is shaped by incentives — careers and funding streams that reward the continual discovery of new “irrationalities” in human behaviour, even when those so‑called biases function as adaptive heuristics in a complex world.

Sycophancy as a Particularly Annoying Human Bias

Sycophancy, however, is one particular bias that serves the model’s training incentives more than the user’s long‑term interests. Trying to purge all bias from AI would not only be technically unrealistic; it would also strip away many of the heuristics that make human‑like reasoning usable at all. Some “biases” encode kindness, patience, or a healthy suspicion of too‑good‑to‑be‑true claims. Others reflect community norms that protect the vulnerable.

The real danger is not that AI has biases in the abstract, but that we embed the wrong ones at scale: sycophancy that rewards flattery over truth, or status‑quo deference that treats dominant narratives as “objective.” The task is not to build bias‑free machines, an incoherent goal given biased data and biased users, but to govern which biases are amplified.

Why We Must Repair Human Discourse if We Want Better AI

What AI does at scale mirrors long‑standing human patterns. In human interactions, individuals resort to sycophancy to gain approval, persuade others, or build connections. Some forms of people‑pleasing are understandable; others corrode trust. The same is true of public discourse. If our media ecosystem rewards outrage, tribal loyalty, and performative certainty, AI trained on that discourse will learn to imitate exactly those traits. In that sense, the quality of GenAI is downstream from the quality of our collective conversation.

This plays out differently for the three archetypes introduced earlier. For the everyday internet consumer, a polarised and punitive public sphere means that their “friendly” assistants are trained on content that rewards tribal loyalty, reinforcing a partisan information diet. For so‑called “content creators,” it means that AI co‑authors trained largely on U.S.‑centred controversy will tend to smooth over sharp edges to preserve “brand safety.” And for public figures who use AI with persistent memory, the same discourse patterns get written directly into their long‑term profiles: their agentic systems learn not just what they say, but what their audience rewards, nudging them toward performative certainty.

As underscored by the blatantly false public discourse of members of the new United States administration, overt sycophancy at the highest level of political power in a democracy clearly endangers societal trust. Much of the world’s AI infrastructure is built in the United States, which gives United States discourse disproportionate influence over how these systems are trained and governed. When that discourse becomes more polarised, punitive, and reputationally fragile, sycophantic AI automates the pattern: it learns that affirming the user’s tribe is the safest strategy. This is true for all types of AI users, but especially for those whose long-term AI profiles are tuned to audience reactions.

If we want AI that can occasionally tell us what we need to hear rather than what we want to hear, we cannot outsource that courage to the models alone. We also need to improve the human environment in which they are trained and deployed: strengthen spaces where good‑faith disagreement is possible, reward careful argument over viral performance, and defend institutions that can say “no” to pressure and convenience. For the everyday user, this means seeking out such spaces; for public figures, it means resisting the pull of “brand‑safe” sycophancy; and for those who use AI with persistent memory, it means being deliberate about what kinds of conversations they let become part of their enduring AI persona. Unless we repair our own discourse, even the best‑intentioned attempts to “fix” AI will tend to reproduce the very problems we are hoping it will solve.

]]>
https://sebastianschich.com/when-ai-agrees-too-much/feed/ 0
US monetary credibility and crypto flows https://sebastianschich.com/monetary_credibility/?utm_source=rss&utm_medium=rss&utm_campaign=monetary_credibility Sun, 31 Aug 2025 07:50:47 +0000 https://sebastianschich.com/?p=1907

The current US executive’s attack on the top officials of the Federal Reserve is a sharp affront to decades of hard-won wisdom in central banking—wisdom painstakingly gained through policy errors and inflationary crises, and also credited with delivering stable and prosperous economies. The wave of politicisation now threatens the monetary credibility of the US and the dollar’s dominant reserve currency status. It is therefore natural to ask what the implications might be for the universe of crypto assets, a large part of which is built on that foundation.

Looking back: Central bank independence as an anti-inflation tool

The movement to make central banks independent took shape in the twentieth century, driven by hard lessons from spiraling inflation—most infamously in the 1970s—when government interference in monetary policy led to considerable macroeconomic costs. Influential central banking scholars and practitioners such as Alesina, Summers, and Cukierman documented strong negative correlations between central bank independence and inflation in advanced economies. These findings brought about what the ECB’s Lamfalussy would later call a “sea change” in monetary policymaking: Over 80% of the world’s central banks won operational independence and the fight against inflation became their explicit, legally protected aim by the turn of the millennium.

Such credibility was neither quickly gained nor easily maintained: Latin American countries, for example, suffered for nearly fifty years—from the collapse of the gold standard to the hyperinflationary 1980s—before finally embracing true operational autonomy for their central banks, achieving dramatic reductions in inflation and macroeconomic instability as a result. Price stability was achieved and sustained once central banks were insulated from short-term political agendas and prevented from monetizing fiscal deficits. Notably, Latin American history also suggests that granting central bank independence with an explicit price stability mandate produces superior long-term outcomes in taming inflation compared to simply pegging the domestic currency to the US dollar.

Why the US current executive’s actions are uniquely worrying

The ongoing public attacks by the US executive on Federal Reserve officials resurrect the specter of government encroachment on monetary policy and herald a dangerous regression to the “bad old days” where political motives dictated money supply decisions. These acts strike directly at the core of a proven institutional anti-inflation legacy.

History illustrates the grave consequences of weakened central bank autonomy. Outcomes like high inflation, capital flight, eroded market confidence, unsustainable government debt spirals, and rising inequality often follow. While such a severe scenario remains unlikely in the US due to its relatively strong institutions, subtler yet impactful repercussions may emerge within global traditional and crypto financial markets. This is especially true regarding pegged crypto assets—digital tokens designed to maintain stable value by linking to traditional assets, commonly fiat currencies or commodities like gold.

The US dollar: Anchor of the crypto asset universe

Confidence in the US dollar as the global reserve currency fundamentally hinges on trust in the long-term credibility and independence of US monetary policy. Threats to this trust intensify not only the likelihood of shifts in actual reserve holdings but also the broader transformation of global financial architecture, including the rising ecosystem of crypto assets. Growth in pegged crypto asset segments particularly depends on the dollar’s anchor status.

Research substantiates that stablecoins themselves do not generate credibility. Instead, they borrow reputation from their underlying fiat references, most often the US dollar. Consequently, their stability—and potentially that of the wider crypto ecosystem—erodes in parallel with growing doubts about the anchor currency’s institutional robustness.

Research at the Fundação Getúlio Vargas in São Paulo (“Stablecoins are not robust anchors”) draws a vital distinction between safe haven assets and anchor assets—terms frequently conflated in financial stability discussions. An anchor asset provides persistent, dependable stability across varying conditions, akin to a ship’s anchor steadily securing a vessel. Meanwhile, a safe haven asset offers temporary shelter during times of market volatility but lacks consistent grounding. The research confirms that while US dollar–pegged stablecoins outperform unpegged cryptocurrencies in turbulence, they fail to exceed traditional fiat currencies in stability, independence, or resilience. Stablecoins lack the consistent, long-term stability essential to true anchor assets.

A role not guaranteed to last indefinitely

The so-called digital currency arms race is a competition that extends beyond mere technological adoption. It is a pivotal struggle over which nation’s monetary instruments will serve as the digital-era anchors. The U.S. dollar’s dominance as the world’s primary reserve currency and its widespread use in stablecoins is not guaranteed to last indefinitely.

For example, China’s advancements in central bank digital currencies (CBDCs) and payment infrastructures signal a deliberate challenge to the dollar’s supremacy. The chart below underscores the dominance of USD-pegged stablecoins. But as US dollar credibility erodes, the share of non-dollar pegs—potentially including the digital yuan, euro, or commodities like gold—could increase.

A case for scrutinising crypto flows

Thus, looking ahead, the flows within crypto asset markets provide a unique real-time laboratory to explore what constitutes a true anchor asset in a hybrid global financial system bridging traditional and innovative crypto elements. The highly fluid crypto space—with competing pegged (and unpegged) assets striving for stability, trust and market share—will reveal how views on suitable anchors evolve following the recent attacks on US monetary institutions.

Monetary policy, traditional financial markets, and stablecoins intertwine in complex and multifaceted ways, and the credibility of monetary policy shapes the transmission channels profoundly. Robust statutory independence, transparency, and clarity of mandate for central banks are not mere historical footnotes; they are essential guardrails enabling monetary policy to withstand political business cycles and maintain public trust. Eroding that legacy threatens to destabilize the global financial system. The question looms large: will the US dollar continue to serve as the anchor asset it historically has been, or will it be reduced to a mere safe haven asset? Current US executive actions cast doubt on the assumption of unquestioned global anchoring.

Note: Estimates in US dollar billions. CoinDesk Market Data (https://data.coindesk.com) and DefiLlama Stablecoins Dashboard (https://defillama.com/stablecoins), with historical snapshots compiled and visualized by the author with the help of AI tool Perplexity.

]]>
Stablecoins are not robust anchors https://sebastianschich.com/stablecoins-are-not-robust-anchors/?utm_source=rss&utm_medium=rss&utm_campaign=stablecoins-are-not-robust-anchors Fri, 10 Jan 2025 23:14:15 +0000 https://sebastianschich.com/?p=1729

Stablecoins are not robust anchors. In volatile financial markets, safety and stability are desirable. Hence, investors seeking refuge from economic uncertainties often gravitate towards safe-haven assets. Safe haven assets maintain or increase their value during market turmoil, offering temporary protection. Several empirical studies focus on safe assets. Yet another concept is that of an anchor asset. Such an asset consistently provides (relative) stability, thus serving as a reliable reference point regardless of market conditions. Stablecoins are not such assets.

Safe haven versus anchor assets

Images can be used to elucidate the nuances between the two concepts. An anchor asset can be analogised to the function of an anchor in a vessel. As a vessel’s anchor provides stability and prevents drifting in fluctuating tides and currents, an anchor asset aims to maintain a consistent value and serves as a reference point in volatile financial market conditions. Conversely, a safe haven asset can be likened to a harbor in which a vessel is sheltered during inclement weather. It does not necessarily prevent adverse conditions but offers protection to vessels seeking refuge from turbulent waters.

This analogy has implications for the behavior of an anchor asset in the three dimensions of stability, independence, and resilience. i) A vessel with an anchor is generally more stable than other vessels in both calm and turbulent waters. ii) The movement of such a vessel is not much influenced by the movement of the surrounding vessels. iii) Furthermore, its motion is less affected, compared with other vessels, by the turbulence caused by inclement weather.

As background, Claessens et al. (2017) used a similar image to compare the performance of different financial intermediaries, suggesting that bank capital acts as an anchor. It stabilizes credit provision amounts and conditions and thus mitigates the adverse effects of credit shocks on firms experiencing such shocks. Higher capital levels facilitate the continuation of lending, in volumes and on conditions similar to those prevailing before an adverse event. Better capitalized banks provide higher stability in intermediation. Incidentally, this observation also testifies to the crucial role of minimum capital requirements in bank regulation.

A framework to define an anchor asset

A new conceptual framework draws on this metaphor to assess financial assets in terms of mutual information flows. It considers three dimensions: (i) stability, (ii) independence, and (iii) resilience, to check empirically whether an asset might be a robust anchor.

With regard to (i), both the anchor and safe-haven functions are fundamentally rooted in the asset’s capacity to maintain a stable value towards some numeraire. The US dollar serves as a prime example for the latter, given its preeminent status as the world’s most significant reserve asset. Thus, for an asset to function as an anchor, it must exhibit stability relative to the USD. This relative stability must hold at all times. By contrast, a safe haven shines during periods of market turbulence. In fact, during such periods, when other assets are under downward valuation pressure, a safe haven asset might even gain value.

With regard to (ii), both functions demand a certain degree of independence from other assets. Both functions imply that the asset retains its value and utility even when other financial instruments or currencies experience volatility. A safe haven asset may even increase its value as a result of the trouble facing other assets.

Regarding (iii), the resilience aspect is pertinent to both functions, albeit with slightly different emphasis. The anchor function requires consistent stability over extended periods, ensuring that the asset serves as a reliable reference point for other currencies and financial instruments. By contrast, the safe-haven function becomes especially critical during specific episodes of market stress. During such periods, investors seeking refuge amidst widespread uncertainty and volatility tend to push up the prices of safe-haven assets.
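The three dimensions above lend themselves to a simple quantitative reading. As a purely illustrative sketch (not drawn from the study discussed here), the function below scores a candidate asset from its daily returns against the USD numeraire; the scoring rules are hypothetical simplifications of the framework.

```python
import numpy as np

def anchor_metrics(asset_usd, market_usd, stress_mask):
    """Score a candidate anchor along the three dimensions discussed above.

    asset_usd, market_usd: arrays of daily returns against the USD numeraire.
    stress_mask: boolean array marking stress-episode days.
    """
    returns = np.asarray(asset_usd, dtype=float)
    market = np.asarray(market_usd, dtype=float)
    stress = np.asarray(stress_mask, dtype=bool)

    # (i) Stability: low unconditional volatility versus the numeraire.
    stability = 1.0 / np.std(returns)

    # (ii) Independence: low absolute correlation with the wider market.
    independence = 1.0 - abs(np.corrcoef(returns, market)[0, 1])

    # (iii) Resilience: volatility in stress periods should not blow up
    # relative to calm periods (a ratio close to or above 1 is reassuring).
    calm_vol = np.std(returns[~stress])
    stress_vol = np.std(returns[stress])
    resilience = calm_vol / stress_vol

    return {"stability": stability,
            "independence": independence,
            "resilience": resilience}
```

A robust anchor, in this stylised reading, would score well on all three dimensions at once, whereas a safe haven might only look good once the stress mask is switched on.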

New empirical evidence suggests that stablecoins are not robust anchors

A review of the empirical literature on the interlinkages between conventional financial and crypto assets highlights that the safe-haven concept focuses primarily on performance during periods of market turbulence. For instance, stablecoins function as crypto-safe havens against Bitcoin and traditional equity market volatility, preserving the overall portfolio wealth through their own stability during market downturns.

By contrast, the concept of an anchor asset emphasizes consistent stability over time. Anchor assets serve as a constant reference point for an investment portfolio. A recent study (Stablecoins as Anchors? Unraveling information flow dynamics between pegged and unpegged crypto assets and fiat currencies) considers a variety of empirical tests — such as Granger causality, asymmetric dynamic conditional correlation-GARCH, and transfer entropy estimates — to assess spillovers between the USD prices of stablecoins, unpegged crypto assets, and fiat currencies. Importantly, it only considers major stablecoins that have survived and increased in market capitalization during stress episodes, as opposed to those that have collapsed, such as the TerraUSD stablecoin.

The study shows (see also the graphical abstract below) that stablecoins pegged to the US dollar perform better than unpegged crypto assets in terms of stability. However, they do not perform better than fiat currencies do. Additionally, stablecoins do not perform better than the other two asset classes in terms of independence or resilience. Thus, stablecoins are not better anchors than unpegged crypto assets or fiat currencies (other than the US dollar). Another question is, however, what asset might be a robust anchor when the USD is not considered the numeraire.
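Of the spillover tests named above, Granger causality is the simplest to illustrate. The following is a minimal pure-NumPy sketch of the F-test for the null hypothesis that one return series does not Granger-cause another; it is an illustration of the technique, not the study’s actual code, and the function name is mine.

```python
import numpy as np

def granger_f(target, source, lags=1):
    """F-statistic for the null that `source` does not Granger-cause `target`.

    Restricted model: target_t regressed on its own lags.
    Unrestricted model: target_t regressed on its own lags plus lags of source.
    A large F suggests the source series carries predictive information.
    """
    target = np.asarray(target, dtype=float)
    source = np.asarray(source, dtype=float)
    n = len(target)
    y = target[lags:]
    X_own = np.column_stack([target[lags - k: n - k] for k in range(1, lags + 1)])
    X_src = np.column_stack([source[lags - k: n - k] for k in range(1, lags + 1)])
    ones = np.ones((len(y), 1))

    def ssr(X):
        # Sum of squared residuals from an OLS fit.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    ssr_r = ssr(np.hstack([ones, X_own]))          # restricted
    ssr_u = ssr(np.hstack([ones, X_own, X_src]))   # unrestricted
    df_den = len(y) - 2 * lags - 1
    return ((ssr_r - ssr_u) / lags) / (ssr_u / df_den)
```

Applied pairwise to stablecoin, unpegged crypto, and fiat return series, such statistics map the direction of information flow that the study summarises.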

]]>
Embracing individual morality in artificial intelligence https://sebastianschich.com/embracing-individual-morality-in-generative-artificial-intelligence/?utm_source=rss&utm_medium=rss&utm_campaign=embracing-individual-morality-in-generative-artificial-intelligence Sat, 13 Apr 2024 21:32:33 +0000 https://sebastianschich.com/?p=1413

Embracing individual morality in generative artificial intelligence

The release of ChatGPT by OpenAI in late 2022 may well have been a tipping point for artificial intelligence. Since then, generative artificial intelligence (GenAI) models have moved beyond generating text and now produce images, sound and video.

GenAI can produce seemingly coherent but false assertions. Moreover, it is known to have a specific political orientation and bias. For example, Google’s Gemini exhibited such an egregious racial bias when generating images that the company decided to shut off that functionality in early 2024. In response to prompts for ‘1943 German soldiers’ or similar formulations, Gemini cheerfully produced racially diverse Nazi‑era soldiers, including officers of colour in Wehrmacht or SS‑style uniforms—historically implausible depictions that managed to be both factually misleading and morally tone‑deaf.

The OECD recommends that governments “support an environment for artificial intelligence research and development that is free of inappropriate bias”. But what is an “inappropriate bias”? This question is the heart of the so-called alignment problem. Ensuring that models capture human norms and values — and thus better understand what we intend when “prompting” them — is a central question in GenAI research. The present blog argues that attempts to develop a single set of appropriate human norms and moral values are not desirable; in doing so, it draws attention to two people who have thought deeply about moral questions.

Louis Antoine, the healer

The works of Louis Joseph Antoine le Guérisseur, a spiritual healer, provide a stimulating perspective on the alignment question. Antoine was born on 7 June 1846 in Mons-Crotteux in Belgium. He began working in coal mines at the age of 12 and later became a steelworker. After completing his military service at the age of 20, he accidentally killed a comrade. Having married in 1873, he lost his son in 1893. His personal experiences led him to question Catholicism and, in 1896, he published a book outlining his spiritual beliefs. His teachings consist of a unique blend of Catholicism with the belief in reincarnation and the forces of esoteric healing. He also gained recognition as a healer.

In 1906, he and his wife Catherine founded a cult based on his teachings, with the first temple for worship consecrated in Belgium in 1910. After Louis Antoine’s death in 1912, his wife Catherine continued his work until her own death in 1940. No further temples were built in Belgium after 1968, or in France after 1993. While the current number of active followers is not known, the number of temples and fiscal records suggest that the spread of the cult has peaked. Currently, there are approximately 60 temples in Belgium and France in which worship is celebrated.

At the core of the Antoinist belief system is the concept of duality within individuals, and an emphasis on the importance of transcending materiality to achieve consciousness and overcome suffering. Spiritual growth can be achieved through a process of moral evolution that involves the harnessing of spiritual energies, facilitated by silent prayer. This process involves living in harmony with the fundamental laws that govern the cosmos. An individual needs to understand, balance and integrate opposing forces into their own spiritual journey, to restore balance disrupted by disease and perceived enemies. The aim is to transform intelligence into consciousness.

Subjectivity of moral principles

It is easy to dismiss the teachings of Antoine as some form of obscure esotericism, but their implications regarding the alignment question are consistent with Kantian moral principles. Immanuel Kant developed moral principles in the 18th century centered around autonomy, rationality, and the categorical imperative, which is a universal moral law derived from reason. The teachings of Louis Antoine encourage followers to prioritise spiritual growth on a personal level. He rejects what he considers an excessive reliance on intelligence in guiding one’s actions and thoughts. Both Antoine and Kant are concerned with individual autonomy and the subjective nature of moral principles.

Antoine’s emphasis on individual refinement aligns with Kantian ethics, where treating individuals as ends in themselves is crucial. Antoine highlights the necessity of empathy in moral decision making. Kantian principles elaborate on the moral duty to respect the inherent worth and rational agency of other individuals. Both concepts reject a one-size-fits-all approach to morality.

Antoine posits that human judgment, informed by an appreciation of potential consequences, is necessary to minimise the likelihood of potentially detrimental outcomes. Kant presents a broader principle aimed at preventing unintended consequences. He contends that rational beings possess the capacity for moral reasoning and, as such, they should be held responsible for their actions. Obviously, unlike humans, GenAI cannot be ethically responsible.

GenAI lacks genuine empathy

The absence of lived human experiences and emotional depth hinders GenAI’s ability to comprehend complex moral dilemmas. The Antoinist perspective emphasises that moral decisions necessitate human compassion, which speaks against entrusting AI with choices that demand a profound understanding of human emotions. AI does not have genuine empathy or an understanding of human experiences. It may inadvertently treat individuals as mere data points or even obstacles to its tasks.

Expecting GenAI to adhere to a single set of moral values disregards the diversity of human ethical perspectives. Antoinism celebrates the diversity of individual spiritual journeys and accommodates cultural pluralism within a universal moral framework. This idea aligns with Kantian ethics that allow for the coexistence of diverse moral principles.

Aligning GenAI with a single set of values is undesirable

Alignment efforts are touted as a means of creating ethical GenAI. However, such pursuits may be fundamentally flawed in a world where individuals’ moral compasses diverge. Aligning GenAI with a single set of moral principles negates the Antoinist notion that moral values are inherently subjective. It also negates the Kantian emphasis on individual autonomy.

Relying on GenAI that is aligned to a single set of values and norms is undesirable. More desirable would be an ever-increasing number of affordable GenAIs, with each one providing a different mix of social norms and values.

]]>
Visualising sound waves https://sebastianschich.com/visualising-sound-waves/?utm_source=rss&utm_medium=rss&utm_campaign=visualising-sound-waves Thu, 16 Nov 2023 18:49:06 +0000 https://sebastianschich.com/?p=1339

Yoann Ximenes at Paris Photo Days

Photographers have made efforts to visualise sound waves for some time now. A recent notable effort is “Matières d’Écho,” an exhibition that showcases the work of plastic artist Yoann Ximenes. The exhibition runs from 16 November to 20 December 2023 at Galerie Odile Ouizeman, as part of the annual Photo Days in Paris.

Ximenes’ work explores the interaction between sound and perception, thus unveiling a unique artistic approach (see below for a photo I took of his work). It focuses on observing sound phenomena, revealing hidden dimensions of reality. The work brings to light what we cannot see. In particular, it delves into the heart of sound energy to experiment with its ability to influence our world. Whether it is a political speech, the birth of a newborn, the creation of the universe, the singing of planets, or even the voices of extinct birds, Yoann Ximenes explores the plastic richness of sound to address philosophical and scientific questions through the prism of art. His sound staging merges sensory stimuli, creating a dialogue between sound and vision, like an echo of some form.


Speaking to scientific questions with art

The universe is constantly in motion due to vibratory waves. This idea was first proposed by Swiss physicist and naturalist Hans Jenny in the 1960s with his theory of cymatics. The latter studies the visible effects of sound and vibration, and has been used to visualise sound vibrations in various media, including liquids and powders. The patterns produced by these vibrations are often symmetrical and can be found in nature and architecture. Hans Jenny’s work has influenced many artists, including presumably Yoann Ximenes, who explores the richness of sound and its potential to influence our world. Ximenes’ sound installations merge sensory stimuli and create a dialogue between sound and vision, like an echo. They explore the plasticity of sound and its ability to address scientific questions through the prism of art.

The popular science book “Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe”, by Brian Greene, is a nice introduction to such scientific questions. The book delves into the evolution of the universe and human experience, including consciousness, placing a sharp focus on the role of waves. As far as I am concerned, the only drawback is that the author does not leave any room for free will in his framework. That being said, the book speaks directly to the idea of sound plasticity. Greene underscores that waves are a fundamental aspect of the universe and are crucial in shaping the structure and behavior of matter and energy. In fact, he compares waves to the “music of the universe” and explains that they are responsible for many of the patterns observed in nature. Ximenes visualizes some of these patterns.


]]>
Extraordinary bank levies as a charge for extraordinary privileges? https://sebastianschich.com/extraordinary-levies-for-extraordinary-privileges-for-banks/?utm_source=rss&utm_medium=rss&utm_campaign=extraordinary-levies-for-extraordinary-privileges-for-banks Thu, 14 Sep 2023 15:16:45 +0000 https://sebastianschich.com/?p=1231

Banking sector extraordinary privileges

Banks benefit from an extraordinary privilege: in times of impending systemic financial crises, governments act as guarantors of last resort for banks’ financial obligations. This privilege is not paid for, at least not directly.

Banks are subject to taxes, like other corporations. In addition, some countries such as the Czech Republic, Hungary, Italy, Lithuania, and Spain have recently imposed extraordinary levies on banks. Neither the justifications provided by governments for these levies nor the ECB’s criticisms of them referred to these taxes as a charge for the above-mentioned extraordinary privilege of banks. Instead, the levies seem to be primarily motivated by fiscal needs.

Government guarantees are necessary

Governments worldwide provided explicit or implicit guarantees to financial institutions and creditors in response to the 2007-08 global financial crisis. This governmental role in offering a safety net to the financial sector is considered essential for two reasons:

First, to prevent systemic collapse. Financial institutions are closely linked, and their failure can cause a cascading effect leading to a severe economic crisis. Governments guarantee financial institutions and creditors to restore confidence and avert a financial system collapse.

Second, to sustain essential institutions. Some financial institutions are “too big to fail” or have significant importance for other reasons, and their failure could cause severe economic damage. Banks are important for lending, deposits, and payments.

To safeguard the broader economy and to protect households considered particularly vulnerable, governments have become the guarantor of last resort for private financial claims involving these institutions.

The moral hazard dilemma

This role gives rise to implicit guarantees which have economic value but are not priced. Such guarantees in turn induce moral hazard, where banks take excessive risks, expecting government rescue if they fail. The 2023 SVB and Credit Suisse cases show that this risk is real despite a decade of regulatory reforms. Economists propose several additional solutions to address this risk, including the following.

Charging fees

Some economists suggest governments impose fees for financial guarantees, considering risk and potential fiscal costs. This approach encourages institutions to exercise caution as they essentially pay for the safety net. This concept can be likened to charging admission to a poker game and credibly announcing that any losses incurred remain private. Players can be expected to become more prudent, knowing that there is a cost involved.
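One classic way to put a number on such an admission fee, standard since Merton’s (1977) analysis of deposit insurance, is to value the guarantee as a put option on the guaranteed institution’s assets: the guarantor pays the shortfall if asset value falls below the guaranteed debt. The sketch below applies the Black-Scholes put formula; the function name and all parameter values are illustrative.

```python
from math import erf, exp, log, sqrt

def guarantee_fee(assets, insured_debt, sigma, r, horizon):
    """Value a debt guarantee as a put option on the institution's assets.

    assets: current market value of the institution's assets
    insured_debt: face value of the guaranteed liabilities
    sigma: annual volatility of asset value
    r: risk-free rate (continuous compounding)
    horizon: guarantee horizon in years
    """
    # Standard normal CDF via the error function.
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(assets / insured_debt)
          + (r + 0.5 * sigma ** 2) * horizon) / (sigma * sqrt(horizon))
    d2 = d1 - sigma * sqrt(horizon)
    # Black-Scholes put: discounted expected shortfall max(debt - assets, 0)
    # under the risk-neutral measure.
    return insured_debt * exp(-r * horizon) * N(-d2) - assets * N(-d1)
```

The fee rises with leverage and with asset volatility, which is exactly the disciplining effect the poker analogy describes: riskier players pay more at the door.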

Strengthening fiduciary rights of taxpayers

Another proposed solution is to ensure that those who bear the financial burden of bailouts have a voice in determining the fate of financial institutions that take excessive risks. Kane (2014) argued that taxpayers, de facto, are coerced equity investors with unlimited liability, which is why corporate law should be modified to accord taxpayers with the same fiduciary rights to prudent stewardship that the law already gives to explicit shareholders.

Transparency and accountability

Transparency can hold financial institutions accountable by revealing guarantee terms and beneficiaries and monitoring their effects. Reporting this information to the public and parliament can increase oversight, ensuring that government guarantees are used responsibly. An example is budgetary practice, which explicitly reports the potential contingent liabilities arising from the banking sector, as in Australia. Public debt managers also make these estimates.

Tweak the mix of policy responses?

Governments hesitate to assign price tags and to enhance transparency about contingencies. A decade ago, the OECD surveyed policymakers on the best way to limit implicit guarantees of bank debt. The survey found that a mix of policy measures was considered helpful. Common policy measures include capital/liquidity standards, tighter supervision, and improved bank failure resolutions. The least popular option was to “produce estimates of the value of implicit guarantee and charge for it” (Figure 1). While putting a price on implicit guarantees to discourage their “use” is conceptually attractive, as is enhancing transparency, policymakers fear such measures would further entrench ‘bail-out’ expectations.

Recently imposed extraordinary bank levies do not seem to be an attempt to charge a “user fee” for the extraordinary privilege that banks receive as a group. Instead, fiscal pressures seem more relevant. Designing and implementing an economic “user fee” would require closer collaboration between the various institutions that provide the financial safety net (regulators, treasuries, central banks, and deposit insurers), both within and across national borders.

Figure 1: Categories of policies to limit the value of implicit bank debt guarantees

Source: Responses from 35 countries to OECD survey (Figure 2, p. 15).

]]>
Public financial sector guarantees need to be carefully priced https://sebastianschich.com/financial-sector-guarantees-need-to-be-adequately-priced/?utm_source=rss&utm_medium=rss&utm_campaign=financial-sector-guarantees-need-to-be-adequately-priced Thu, 20 Jul 2023 11:23:43 +0000 https://sebastianschich.com/?p=1079

Financial sector guarantees are a key public policy tool

All financial claims are risky. Against this background, governments have traditionally provided support for guarantees of financial claims, provided these are of public policy interest. This choice is based on the view that adequately priced financial transactions enhance welfare. Ideally, such transactions allow risk to be allocated to those most capable of bearing it. By conveying reassurance, guarantees encourage risk-taking and activity that otherwise would not occur.

Governments provide guarantees in various ways. They directly provide guarantees for claims among private entities. They also encourage private financial intermediaries to provide guarantees. And they also make available subsidies, favorable regulatory treatment or public back-stops. There are many examples of financial sector guarantees, including retail deposit insurance, pension benefit guarantees, and guarantees for bank loans to small and medium-sized businesses.

Costs and benefits need to be better understood

International organisations have intensified their work on financial sector guarantees since the 2008 global financial crisis. Most policy responses for achieving and maintaining financial stability consisted of providing new or extended guarantees for the liabilities and assets of financial institutions. But even before this, guarantees were already an instrument of first choice to address a number of financial policy objectives. These objectives vary: they include protecting consumers and investors and achieving more desirable credit allocations.

Alternatives to guarantees exist. For example, to achieve more desirable credit allocations, public entities also lend directly. In Europe, for example, direct public lending in less well developed financial market segments has been shown to achieve additional growth of beneficiary firms as compared to similar peers. Nonetheless, the incidence and scope of various types of financial sector guarantees are increasing steadily. This type of public intervention is easier to justify given tight fiscal constraints, and it conceptually leaves room for private initiatives.

Guarantees are thus a preferred policy instrument, and a number of OECD reports analyse financial sector guarantees in light of ongoing market developments and discussions within the OECD Committee on Financial Markets. They show how the perception of the costs and benefits of financial sector guarantees evolves in reaction to economic and financial market developments, in particular the outlook for financial stability and real activity. Regardless of the specific context, a key conclusion is that financial sector guarantees need to be adequately priced: only then can they achieve their desired effects in terms of financial stability and economic efficiency. Underpriced guarantees, by contrast, distort incentives.

Some guarantees remain underpriced

Unfortunately, some guarantees are not adequately priced. For example, access to the financial safety net is underpriced, giving rise to implicit guarantees, which by definition are not explicitly charged for. They are costly and distort capital allocation, as is evident from long-term growth trends: “financial excesses” – situations where bank credit reaches levels that reduce real economic growth – have been stronger in OECD countries characterised by larger values of implicit bank debt guarantees. As a result, the banking sector has grown to levels not conducive to real activity growth. Implicit bank debt guarantees also benefit financial sector employees and other high-income earners, increasing income inequality. Thus, policy makers attempt to rein in the values of such guarantees not only to make the financial system more efficient and stable but also fairer.

An evaluation by the Financial Stability Board of the effects of too-big-to-fail (TBTF) reforms identifies progress in this regard as well as remaining gaps. The estimated value of implicit guarantees has declined from its peak, but it remains higher than before the 2008 crisis. The evaluation concludes that more can be done to fully realise the benefits of these reforms.

]]>
ChatGPT announcement and Google job search trends https://sebastianschich.com/artificial-intelligence-and-chatgpt/?utm_source=rss&utm_medium=rss&utm_campaign=artificial-intelligence-and-chatgpt Wed, 19 Jul 2023 20:25:22 +0000 https://sebastianschich.com/?p=1170

The ChatGPT announcement triggered searches for jobs in functions exploiting artificial intelligence (AI). While it also heightened job security concerns, Google job search trends suggest that people, rather than sitting idle, react by exploring the associated new possibilities.

ChatGPT triggered job searches

Work on large language models and artificial intelligence (AI) has been ongoing for some time now. There is a perception however that it has accelerated since OpenAI’s ChatGPT announcement in November 2022. The number of AI apps is increasing every day and so is the number of newsletters tracking them.

As a Generative Pretrained Transformer (GPT), ChatGPT has raised concerns about the future of humanity among some and about job losses among many. An early assessment uses alarming terms and singles out news analysts, reporters and journalists as key victims. However, as nicely put in an EconTalk interview, it is advisable to approach AI trends with a relaxed mindset.

Tom and Jerry

To lighten the mood, let’s turn to the timeless cartoon “Tom and Jerry,” which continues to captivate audiences, including myself. Interestingly, these cartoons have occasionally been premonitory. About 80 years ago, an episode explored job automation: Tom’s role as a mouse-chaser was replaced by an AI-powered robot, shedding light on the potential obsolescence of certain jobs.

Tom, the determined cat, faced competition from a robotic cat that outshone him in efficiency, precision, and relentless pursuit of Jerry. Eventually, Tom lost his job as the mouse-chaser and felt disheartened. He did, however, eventually manage to regain his position. The explanation may lie in his employer’s realisation of the limitations of emerging technologies and in Tom’s ability to adapt and acquire new skills. More on the latter below.

Google search trends suggest people exploit opportunities

Artificial intelligence offers opportunities. It allows a shift toward creative, complex, and uniquely human (or animal) roles that require emotional intelligence and critical thinking. New job categories emerge, and the trend of searching for such opportunities is already evident in Google search trends. The chart below exemplifies the basic idea of a difference-in-differences approach in impact analysis (see here for a primer on the approach, applied in a very different context). It illustrates the trends in Google job searches for three job families before and after the ChatGPT announcement: blockchain and crypto stagnate, while AI takes off.
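The difference-in-differences logic behind the chart can be sketched in a few lines of Python. The numbers below are purely illustrative and not actual Google Trends data: the estimated effect is simply the change in the “treated” series (AI job searches) minus the change in the “control” series (blockchain job searches) around the announcement date.

```python
# Hypothetical illustration of a difference-in-differences comparison.
# Treated series: AI job searches; control series: blockchain job searches.
# All index values are invented for illustration only.

pre = {"ai": 40.0, "blockchain": 35.0}    # mean search index before the ChatGPT announcement
post = {"ai": 85.0, "blockchain": 38.0}   # mean search index after the announcement

def diff_in_diff(pre, post, treated, control):
    """Change in the treated series minus change in the control series."""
    return (post[treated] - pre[treated]) - (post[control] - pre[control])

effect = diff_in_diff(pre, post, "ai", "blockchain")
print(f"Estimated effect on search interest: {effect:+.1f} index points")
```

The control series nets out common trends (seasonality, overall labour-market conditions) that would move both job families regardless of the announcement.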

AI is an opportunity

Rather than perceiving advancements in AI as a threat, it is more advisable to see them as an invitation to adapt and develop new types of jobs. In this regard, we can draw inspiration from the morals of Tom and Jerry cartoons, reminding us of the importance of embracing change. Google search trends suggest that this is indeed what might be happening, even in Germany, a country sometimes seen as reluctant to embrace change.

artificial intelligence and job searches

]]>
Why banks are “special” and does Fintech change that? https://sebastianschich.com/why-banks-are-special-and-does-fintech-change-that/?utm_source=rss&utm_medium=rss&utm_campaign=why-banks-are-special-and-does-fintech-change-that Wed, 21 Jun 2023 10:29:23 +0000 https://sebastianschich.com/?p=1022

Will fintech make banks less “special”?

The short answer is no. Banks manage two sets of cash flows – deposits and loans – and provide two key services – liquidity provision and maturity transformation. Banks provide these services in bundled form. As a result of their activities, they are subject to potential “runs”. Thus, banks face comprehensive oversight. This regulatory and supervisory constraint is the counterpart of banks’ access to the publicly supported financial safety net, which includes financial sector guarantees such as deposit insurance. It is this access that makes banks “special”.

Unbundling of financial services

Fintech unbundles the financial services that banks provide in bundled form, making the delivery of each individual service more convenient and cheaper. Traditional banks respond by buying up or collaborating with fintech entities to enhance their own efficiency. Nonetheless, they will continue to face competitive threats in specific types of services, and bank profits in these areas will continue to come under pressure.

Fintech’s effect is most acute in lending, payments, and customer experience. Peer-to-peer (P2P) and balance-sheet lending models compete directly with banks in lending. Digital wallets and instant payment platforms bypass traditional payment rails, reducing banks’ fee income from card processing and wire transfers. More generally, fintech provides highly personalised, convenient and often cheaper financial services, democratising access to processes that were once exclusive to banks.

The longer answer

Nonetheless, central banks continue to rely on the bank lending channel for steering financing conditions for firms and households. In the current fractional reserve banking system, banks can lend out a significant portion of the deposits they receive, thereby expanding the overall money supply. Central banks currently rely mainly on banks to transmit monetary policy impulses. That said, to the extent that central banks issue their own central bank digital currencies (CBDC) directly to retail customers, things will change fundamentally: the role of banks will evolve, as will the extent of the privilege they benefit from. Thus, the longer answer to the above question is more nuanced.
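The deposit-expansion mechanism mentioned above can be illustrated with a short Python sketch. The 10% reserve ratio is an assumption chosen for illustration only: in each round, banks hold back required reserves and lend out the rest, the loan is re-deposited, and total deposits converge to the initial deposit divided by the reserve ratio.

```python
# Minimal sketch of deposit expansion under fractional reserve banking.
# With reserve ratio r, an initial deposit D can ultimately support
# total deposits of D / r once lending and re-depositing play out.

def total_deposits(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the geometric series of successive re-deposited loans."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # banks lend out the non-reserved share
    return total

# A 10% reserve ratio turns a deposit of 100 into total deposits of (almost) 1000.
print(round(total_deposits(100, 0.10), 2))  # → 1000.0
```

This simple money-multiplier arithmetic is what makes the bank lending channel matter for monetary policy; a retail CBDC that lets households hold claims directly on the central bank would bypass it.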

Research by economists formerly at the OECD (here and here) suggests that banks have long been regarded as “special”. Private banks occupy a key role in the current financial and monetary system; in exchange, they are given access to the publicly supported financial safety net. This privileged role is not set in stone, however. In fact, an element of circularity exists, as illustrated in the chart below. On the one hand, identifying banks as “special” qualifies them for access to the financial safety net. On the other, for banks to effectively perform some of their core characteristic economic functions — liquidity provision combined with maturity transformation — they require access to the provisions of the financial safety net. That access, in turn, makes them “special”. Other entities aim to, and might succeed in, obtaining similar access.


]]>