
Research Blog

Writer: Trevor Alexander Nestor
Apr 30 · 9 min read

There are many explanations for falling fertility rates, but what if the reason people can’t seem to settle down, find a partner, buy a home, or start a family isn’t just cultural or economic, but can be described by computational complexity? Indeed, poorer countries tend to have higher fertility rates. What if the problem of "building a life" has, quite literally, become fundamentally unsolvable, or computationally intractable, manifesting as anxiety in young people and preventing them from pair bonding? Two strands of economic theory, Complexity Economics and the Spectral Theory of Value, shed light on this phenomenon, which includes less traditional attachment styles among Millennials and Gen Z.


Housing prices, along with other cost-of-living expenses, have risen exponentially faster than wages. Under complexity economics and the spectral theory of value, the socioeconomic status of agents (you and me) can be modeled by the flows of information they facilitate between social and economic institutions, or systems, as described by Luhmann's Social and Economic Systems Theory and Agent-Based Network Modeling. These information flows can be classified by their complexity (under the computational complexity class hierarchy, or the Chomsky hierarchy in linguistic theory, both related to Kolmogorov complexity) and by the agents' ability to tractably "solve" classes of problems. Central planning elites devise a clever system of incentives and disincentives to drive the economic engine and maintain a workforce - agents operating like little Turing machines across a tape, as though on an endless treadmill chasing the American Dream, which is designed as an infinitely deferred promise.


The Spectral Theory of Value was first formulated in the political‐economy literature by Theodore Mariolis, Nikolaos Rodousakis, and George Soklis, who in 2021 published the Springer monograph Spectral Theory of Value and Actual Economies: Controllability, Effective Demand, and Cycles. In their work they build on Piero Sraffa’s value framework and Rudolf Kalman’s control‐theory formalism under Complex Adaptive Systems Theory to show how the eigenvalues of a vertically integrated technical‐coefficients matrix map directly onto competing value theories. Complexity Economics emerged in the late 1980s and early 1990s out of the Santa Fe Institute, where physicist Philip Anderson, economist Kenneth Arrow, and computer scientist John Holland convened to treat the economy as a constantly evolving, agent‐based complex system. Its principal founder is W. Brian Arthur, whose early SFI papers and later book Complexity and the Economy (2014) crystallized how non‐equilibrium feedback, path dependence, and adaptive agents can be modeled in place of static equilibria.


By tapping into basic biological desires for family formation, primed by millions of years of evolutionary history - that is how they keep the economy going: you have a bunch of "agents" running on treadmills to nowhere, infinite staircases of exponentially growing complexity sublimated into labor in service of capital and the state - what amounts to an NP-hard/EXPTIME problem. In a sense, every agent lives within their own little "simulation," monitored through their devices, which elites can peer into with a sophisticated surveillance apparatus that yields pattern-of-life behavior. The agents must believe in the feasibility of the climb to keep climbing - but at every further rung, hidden context is revealed that complicates the picture - and this is all by design. One must go to college to get a decent job. Once one finds a job, one must save for a down payment and pay off loans. Once enough money is saved, one must have a good credit score. Once both a good credit score and enough money are gathered (by means of working, of course - that is the story the agents are sold), the agent must get a mortgage, only to find they don't actually have the ownership over the house they thought they did because of the local Homeowners' Association, must pay exorbitant taxes on it, and so on. It is important to note that virtually all of these steps have been invented along the way as our society has progressed.


Central elites recognize that as a society progresses, they must maintain what are called Nash equilibria (a concept in Game Theory), which describe stable conditions between agent workers and their owners. These elites have sophisticated models that inform their planning and decision making through feedback control loops between the two spaces, which are noncommutative and have diametrically opposed, competing interests (which can be investigated with RG flow analysis). These equilibria stand in juxtaposition to what are called Catastrophe Points under Catastrophe Theory, which describe points where information cascades and inter-agent entanglements threaten institutional power and signal possible collapse (often defined in relation to the critical line of the infamous Riemann Zeta Function). However complex these models are, over time entropy and agent entanglements challenge the power of central elites as the system advances toward catastrophe points, forcing them to rely on more authoritarian measures and on more complex feedback mechanisms whose burden agents must shoulder, or to release complexity content in the form of entanglements through periodic restructuring or reform. Agent degrees of freedom are inversely related to the background complexity institutions require to facilitate transactions, which manifests as agents' own destabilization and alienation in the form of "mental" disorders like generalized anxiety disorder or depression. This destabilization is the basis on which agents develop less stable or more unconventional attachment styles, where socioeconomic anxiety corresponds with falling fertility rates.
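To make the catastrophe-theory reference concrete, here is a minimal sketch of Thom's cusp catastrophe, the simplest model with such a tipping point. Mapping the control parameters onto "institutional complexity" and "incentive pressure" is our illustrative assumption, not something the post or the theory fixes:

```python
import numpy as np

def cusp_equilibria(a, b):
    """Equilibria of the cusp potential V(x) = x**4/4 + a*x**2/2 + b*x.

    Illustratively, x could stand for an agent's commitment to the
    'treadmill', a for background institutional complexity, b for
    incentive pressure (our assumptions, not the post's).
    Equilibria solve V'(x) = x**3 + a*x + b = 0.
    """
    roots = np.roots([1.0, 0.0, a, b])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real)

# Before the cusp (a >= 0) there is a single stable state; past the
# catastrophe point (a < 0, |b| small) two stable states coexist and
# an arbitrarily small change in b can flip the system discontinuously.
print(cusp_equilibria(a=1.0, b=0.1))   # one equilibrium: smooth regime
print(cusp_equilibria(a=-1.0, b=0.1))  # three equilibria: fold/catastrophe regime
```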


In the language of the spectral theory of value, inflation and stagflation emerge as phase transitions that occur when the “treadmill” underlying our socioeconomic system can no longer be driven purely by its built-in eigenmodes, because agents have become too self-aware (too "woke," you might say), too adept at recognizing and gaming every incentive and disincentive. As individuals (vectors) grow conscious of each step - tuition, credit-score hacks, down-payment workarounds, HOA loopholes - they exploit every low-complexity shortcut. In other words, the system’s most efficient value-creation pathway stalls as the illusion becomes less convincing and agents acquire a high degree of inter-agent entanglement, letting them route around paywalls and institutions. In quantum control terms, once agents detect that the “ground state” (stable life-building) is permanently shifting away, they refuse to stay adiabatically in the same eigenstate. They either opt out of traditional pathways (delay having kids, pursue gig work, seek alternative lifestyles) or demand structural reform - both of which discharge complexity but also break the delicate Nash equilibrium. Because pumping liquidity doesn’t restore a clear λ₀ but merely excites a dense cluster of nearly equal eigenmodes, you get:


Inflation: prices rise as money chases a broad, flat spectrum of value-creation strategies.

Stagnation: real output (the minimal “surface area” of productivity in the Ryu–Takayanagi sense) fails to grow—no new dominant eigenmode emerges.


This stagflation is the hallmark of a system that has passed a critical point: added energy fails to produce a lower-energy ground state.
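As a toy illustration of that spectral picture (our construction, purely for intuition), compare an operator with one clearly dominant eigenmode against one whose spectrum is a dense, nearly flat cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_gap(eigs):
    """Gap between the dominant mode and the rest of the spectrum."""
    mags = np.sort(np.abs(eigs))[::-1]
    return mags[0] - mags[1]

# "Healthy" regime: one dominant eigenmode (a clear lambda_0) that the
# system can settle into.
A_healthy = rng.normal(size=(50, 50)) / np.sqrt(50)
A_healthy += 2.0 * np.outer(np.ones(50), np.ones(50)) / 50  # rank-1 dominant mode

# "Stagflation" regime: a dense, nearly flat cluster of eigenvalues,
# so no single value-creation pathway dominates.
A_flat = rng.normal(size=(50, 50)) / np.sqrt(50)

print(spectral_gap(np.linalg.eigvals(A_healthy)))  # large gap
print(spectral_gap(np.linalg.eigvals(A_flat)))     # near-zero gap
```

When the gap collapses, added "energy" spreads across many nearly equivalent modes rather than selecting a new dominant one.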


As a physics analogy, after a certain region of spacetime is saturated with complexity due to entanglements of particles (agents are much like particles), gravity itself collapses the wavefunction at a fixed or critical point (this is the theory of Entropic Gravity, or Asymptotically Safe Gravity). At this point, particles within a region of spacetime cannot store any further information in their entanglement structure - the spacetime metric itself is forced to change, as described by the Ryu-Takayanagi formula, information cascades across scales (macroscopic quantumlike behaviors), and there is institutional collapse. The critical point connects the UV and IR regimes of the theory, rendering it asymptotically safe from singularities and giving it predictive power - which in our case could represent institutional reform, collapse, or uprisings, all thermodynamically inevitable.


Dating describes a system that is neither solely deterministic and orderly ("serious") nor probabilistic and quantum chaotic ("casual"). You have a high-dimensional search space, like a lattice, in which resolving your unique "American Dream," along with a meaningful long-term relationship, is as intractably complex as the NP-hard Shortest Vector Problem. A new theory is needed to describe the intermediary state between "casual" and "serious," much as quantum gravity would connect classical physics with quantum theory. You can act as a system operator (a Dirac-like dilation operator implicated in the Hilbert-Polya conjecture), and in the process of entanglement with other agents, the entanglement entropy itself will collapse the complexity content onto the solution, which appears as the smallest eigenvalue on the operator spectrum. The system Hamiltonian must be evolved slowly enough that the entanglements can evolve without disruption to the final maximally degenerate state, which should collapse to a solution by gravity itself via the spectral action principle (in our case, the Einstein-Hilbert action).


This new physics is the physics of quantum chaos, which describes systems displaying effects normally seen in microscopic systems but manifesting macroscopically, such as in the behavior of groups of people - the domain of Sociophysics and Econophysics (or just SocioEconophysics). The reason many people are not having kids or entering into stable long-term monogamous pair-bonded relationships (otherwise known as marriage) is that the institutions they are required to go through to facilitate that - institutions which also maintain societal cohesion - have become too complex to maintain. Since the economy runs by exploiting people's evolutionary psychology, sublimating desires into labor, once the economy becomes too complex people feel too much anxiety to bridge the gap. In physics this resembles the exponentially small energy gap problem: quantum annealing can resolve NP-hard problems only if the gap between the ground state and the excited states on an operator's spectrum does not close exponentially - and for hard instances it does, just as we have discussed with the American Dream, where cost of living exponentially outpaces wages over time, especially as a society progresses to its later stages and populations age, putting burdens on the young to prop up the institutions.


The spectral theory of value provides a mathematical bridge between the complex “treadmill” of modern life we have described and a rigorous account of how worth - whether economic, social, or psychological - is generated, maintained, and sometimes collapses. At its heart is the idea that every institution, market, or relationship can be represented as an operator on a high-dimensional space of agents and resources, and that the spectrum (the set of eigenvalues) of that operator encodes the “modes” of value-creation available to participants. In spectral theory of value, each individual or “agent” is represented by a vector in an abstract state space whose coordinates measure their capacities - income, education, social ties, even psychological states. Institutions (labor markets, housing finance systems, dating apps, Homeowner’s Associations, etc.) act on these vectors via linear (or nearly linear) transformations. The matrix or operator you build from all of the rules, incentives, feedback loops, and enforcement mechanisms has eigenvalues whose magnitudes tell you which patterns of behavior (eigenvectors) will be amplified or suppressed over time.
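A minimal numerical sketch of that last claim, with purely illustrative numbers: repeatedly apply an arbitrary positive "value operator" to an agent's state vector, and the dominant eigenvector takes over regardless of where the agent started.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "value operator": rows/columns index agent capacities
# (income, education, ties, ...); entries encode institutional rules
# and feedback loops. All numbers here are illustrative.
n = 6
V = rng.uniform(0, 1, size=(n, n))

agent = rng.uniform(0, 1, size=n)  # an agent's state vector

# Repeatedly applying the operator amplifies the dominant eigenvector:
# whatever behavior pattern has the largest eigenvalue "wins" over time.
for _ in range(100):
    agent = V @ agent
    agent /= np.linalg.norm(agent)

eigvals, eigvecs = np.linalg.eig(V)
dominant = eigvecs[:, np.argmax(np.abs(eigvals))].real
dominant /= np.linalg.norm(dominant)

# The iterated agent state aligns with the dominant eigenvector.
print(np.abs(agent @ dominant))  # ~1.0
```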


Central planners or “elites” understand (perhaps implicitly) that by dynamically (nonlinearly) tweaking credit scoring algorithms, zoning rules, student-loan policies, and welfare guidelines, they are deforming the underlying operator - and hence its spectrum - and they often do so by covert or clandestine means like psychological nudging and behavioral psychology under metanarrative pretexts. They can maintain a Nash equilibrium by ensuring that, for the vast majority of agents, background complexity remains just high enough to keep the system “sticky” (you keep trying), but not so high that mass defection or collapse (riots, reform movements) becomes probable. This delicate balance is akin to adiabatic control in quantum systems: change the Hamiltonian slowly enough that you stay in the same eigenstate, but allow periodic “resets” (reforms, bailouts) to discharge built-up entropy. This is a very challenging thing to maintain indeed. In fact, the same social conditions of inter-agent entanglement needed to facilitate stable attachment styles and family formation also fundamentally threaten institutions with collapse.


The spectral theory of value borrows from quantum information: one can define an “entanglement entropy” from the spectrum’s density function. When that entropy crosses a critical threshold - analogous to hitting a catastrophe point in Thom’s theory or the Riemann critical line in number theory - a given institution can no longer sustain its current spectrum and undergoes a phase transition (market crash, revolution, reform). In economic terms, that is the moment when “hidden context” becomes visible and the cost of maintaining the treadmill spikes so high that it breaks. Central ciphers, such as those found in cryptocurrency ecosystems, are designed to facilitate one-way flows of information to maintain transactional separation between social and economic structures.
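One plausible way to operationalize that spectral entropy, sketched below under our own assumptions (the post gives no formula): treat the normalized eigenvalue magnitudes as a probability distribution over modes and take its Shannon entropy. A flat spectrum, the stagflation regime above, maximizes it.

```python
import numpy as np

def spectral_entropy(operator):
    """Shannon entropy of the normalized eigenvalue spectrum.

    One plausible reading of an "entanglement entropy defined from the
    spectrum's density function": treat |eigenvalues| as a probability
    distribution over modes. This construction is our assumption.
    """
    p = np.abs(np.linalg.eigvals(operator))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

concentrated = np.diag([10.0, 0.1, 0.1, 0.1, 0.1])  # one dominant mode: low entropy
flat = np.diag([2.0, 2.0, 2.0, 2.0, 2.0])           # flat spectrum: maximal entropy

print(spectral_entropy(concentrated))  # small
print(spectral_entropy(flat))          # ln(5) ~ 1.609: near the critical-threshold regime
```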


In short, the Ryu–Takayanagi analogy we have mentioned - where the area of a minimal surface encodes entanglement entropy in AdS/CFT (the Anti-de Sitter/Conformal Field Theory correspondence, a non-perturbative handle on quantum gravity using familiar tools from quantum field theory, which realizes the holographic principle: the idea that the physics inside a volume can be fully described by degrees of freedom living on its boundary) - can be thought of here as a way to connect individual psychological states (micro-entanglements) with large-scale socioeconomic measures (macro surfaces and agent-institution relationships). The spectral measure of the value operator at small scales (individual credit scores, dating profiles) integrates up through the hierarchy of institutions to produce global observables: fertility rates, home-ownership curves, employment statistics. If the spectral gap grows faster than wages (an exponential energy gap), agents never settle into the “ground state” of family life - no coherent eigenvector emerges to represent stable partnership - so fertility falls.




Much debate in the tech community has surrounded the idea of consciousness in AI systems, and what, if anything, could possibly constitute it. With the advent of large language models like ChatGPT and Grok, it is easy to believe that these systems contain consciousness the way the brain does - after all, these systems are built on neural networks, and so is the brain. But overlooking research on the foundational differences between how the brain efficiently processes information on a mere 20-watt energy budget per person and the energy-hungry way neural-network-based systems are currently implemented - consuming at least 1,000x the energy of the human brain - could cost the U.S. in the ballpark of $150 billion by 2030. With a reduction in spending toward research in the US, the president's current AI strategy could lead the United States into a technological quagmire.


World-renowned linguist and MIT professor Noam Chomsky, known as the "father of modern linguistics," once compared large language models to a "glorified autocorrect," claiming that, for all our progress and computational resources, recent advances in AI "differ profoundly from how humans reason and use language," and that "these differences place significant limitations on what these programs can do." Indeed, without human interpreters in the loop, AI models tend to fall easily into hallucinations and approach scaling limits, putting them more squarely in the category of a mass surveillance, search, and synthesis tool than of a sentient being. One might, like Emily Bender and Angelina McMillan-Major of the University of Washington and then-Google researchers Timnit Gebru and Margaret Mitchell (writing under the pseudonym "Shmargaret Shmitchell") in their paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", consider AI systems as simply mirrors - reflections of our collective data which ultimately have no capacity to understand the "meaning" or context of their outputs.


Many of these criticisms of our most advanced AI models seem to pose more questions than answers. What does it mean to "understand" outputs? How does consciousness differ from intelligence? Humans are certainly known to mimic others, to learn by reflecting behaviors, and to be manipulative or lack understanding - how, then, can one say AI systems are different on that basis? How do these AI models differ from the human mind, and what is different about their infrastructure? In some respects these are simply matters of definition, but they nonetheless deserve closer analysis, if only because our remarkable brains, primed by millions of years of biological evolution, are still more advanced and efficient at processing information than our best efforts at surpassing them - and any geopolitical power that learns to harness this will win the AI race and come out ahead.


The first thing to recognize about our AI infrastructure is that we are fundamentally working with formal transactional logic systems built on binary logic gates - one-way flows of information - and this is the basis of the neural networks used to develop large language models. At this layer, computer scientists draw the analogy between logic gating and dendritic connections among brain neurons. At its core, any digital computer, whether running your browser or training a massive language model, boils complex computations down to binary logic and linear algebra. Every arithmetic operation, every matrix multiply-accumulate (MAC) that powers a neural-network layer, is ultimately implemented as a network of logic gates (AND, OR, NOT, XOR, etc.) etched into silicon. These gates process streams of 0s and 1s according to the rules of Boolean algebra, combining them, shifting them, and routing them through registers and arithmetic units.
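A minimal sketch of that reduction (ours, not from the post): a 1-bit full adder built from AND/OR/XOR, chained into a ripple-carry adder, then used for the shift-and-add multiply-accumulate that neural-network layers execute billions of times.

```python
def full_adder(a: int, b: int, carry: int):
    s = a ^ b ^ carry                        # XOR gates: sum bit
    carry_out = (a & b) | (carry & (a ^ b))  # AND/OR gates: carry bit
    return s, carry_out

def add_bits(x: int, y: int, width: int = 16) -> int:
    """Ripple-carry adder: chain full adders bit by bit."""
    carry, total = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

def mac(acc: int, w: int, x: int) -> int:
    """Multiply-accumulate acc += w*x via repeated shift-and-add."""
    for i in range(8):
        if (x >> i) & 1:
            acc = add_bits(acc, w << i)
    return acc

print(mac(0, 3, 5))  # 15: one MAC, reduced entirely to gate-level operations
```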


When applied to machine learning algorithms attempting to mimic human learning, computer scientists have operated on a number of key assumptions. Computer scientists have not only assumed that memory is stored in binary logic (an assumption made in 1943 by scientists Warren McCulloch and Walter Pitts), but also tend to assume Hebbian learning. Donald Hebb proposed the first rule for how synapses change strength:

“When an axon of cell A repeatedly or persistently takes part in firing cell B, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” Informally: cells that fire together wire together. In artificial nets, this inspired early local learning rules - weights are increased when pre- and post-synaptic units are both active.
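In code, the textbook formalization of Hebb's rule is a simple outer-product update; this sketch assumes the standard rate-based reading with a hypothetical learning rate:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule as usually formalized: dw_ij = lr * post_i * pre_j.

    Weights grow when pre- and post-synaptic units are active together
    ("fire together, wire together"). The outer-product form is the
    standard textbook reading, offered here as a sketch.
    """
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([1.0, 1.0])       # postsynaptic activity
w = np.zeros((2, 3))

for _ in range(5):
    w = hebbian_update(w, pre, post)

print(w)  # only synapses between co-active units have been strengthened
```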

This intuitive understanding of neural networks described by McCulloch, Pitts, and Hebb, along with adaptations to implement it in computer software, forms the basis of our modern AI systems and has even inspired new types of hardware, such as neuromorphic chips. However, even with our most advanced chips, these systems are still not conscious and still do not perform on par with the human brain. What is going on? It is conceivable that unlocking the keys to consciousness would not only produce more powerful architectures but achieve unparalleled efficiency in our AI systems.


Consciousness operates a layer below binary logic (in "dialectical" and "intuitionistic" logics), where information flows in both directions (as needed to adjust weights), where, perhaps, subjective qualia are felt - and where context is stored. It has long been known that for most types of memory in the brain, engrams are stored nonlocally, distributed across the tissue rather than held in a single location. There is also no known biologically feasible classical explanation for the binding problem (the question in neuroscience of how the brain combines features such as color, shape, motion, and location, processed in distinct specialized circuits, into the unitary percepts we experience) or for how the brain achieves backpropagation to adjust neural weights (also known as the weight transport problem). The speed at which the brain processes information cannot be accounted for by voltage gating and ion transport across neurons and dendrites alone - chemical synapses impose 1–5 ms delays, and long-range axonal conduction can add 10–20 ms or more, yet humans form object percepts and make decisions in 100–200 ms (reaction times for simple tasks). Classical ion-gating alone can’t account for such rapid, large-scale integration, suggesting that an additional fast timing mechanism may be at play.
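The arithmetic behind that timing argument can be checked directly with the numbers quoted above; the pathway depth is our assumption:

```python
# Back-of-envelope check of the timing argument, using only the numbers
# quoted in the text (per-synapse delay, long-range conduction, and
# observed reaction times).

synapses_in_pathway = 10        # plausible depth, sensor to decision (our assumption)
synaptic_delay_ms = (1, 5)      # per-synapse delay range from the text
conduction_ms = (10, 20)        # long-range axonal conduction from the text
observed_ms = (100, 200)        # reaction-time window from the text

low = synapses_in_pathway * synaptic_delay_ms[0] + conduction_ms[0]
high = synapses_in_pathway * synaptic_delay_ms[1] + conduction_ms[1]

print(f"serial budget: {low}-{high} ms vs observed {observed_ms[0]}-{observed_ms[1]} ms")
# Serial delays alone can consume 20-70 ms of a 100-200 ms window,
# leaving little room for the recurrent, iterative integration that
# binding presumably requires -- the tension the text points to.
```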


In essence, current AI (and even dedicated neuromorphic hardware) excels at statistically learning correlations across large datasets, but it does not implement the real-time, bidirectional, oscillatory, and attention-gated synchronization mechanisms that neuroscientists believe underlie perceptual binding in the brain. Until architectures can support truly dynamic tagging, re-entrant synchronization, and massively distributed ensemble coding, the binding problem will remain unsolved in silicon - and even in our approaches to quantum computation, absent a more complete theory. In fact, hyperscanning studies have even shown synchrony across the brains of different individuals during empathic or cooperative interactions, correlating with social connectedness and shared intentionality - a cross-brain binding, underlying empathy and social interaction, that hints at a deeper, perhaps quantum-mediated, coupling mechanism which current AI architectures cannot emulate. What is needed is a new paradigm that explores this new physics.


Orchestrated Objective Reduction (Orch OR) theory proposes that quantum superpositions lacking definite causal structure within neuronal microtubules carry and integrate information on microsecond timescales, collapsing together the contextual informational complexity stored in entanglements (“objective reduction”) when a gravitational threshold is reached, thus generating discrete conscious moments linked to spacetime geometry and binding information together (similar in principle to Erik Verlinde's theory of quantum gravity - entropic gravity - where regions of spacetime can become saturated by entanglement entropy and result in a gravitational action at an asymptotic fixed/critical point). Microtubules form a cytoskeletal lattice capable of supporting coherent quantum oscillations, acting like waveguides, and could host topologically protected states carrying bidirectional signals through biophotons (some speculate that microtubules host special states called Majorana zero modes and act like Wilczek time crystals) across many frequency ranges, supporting the speed needed to explain consciousness.


Indeed, when microtubules are blocked by anesthetics (halothane, isoflurane, desflurane, sevoflurane) and certain injectable agents - which bind with high affinity to hydrophobic pockets in the α/β-tubulin dimer, the basic building block of microtubules, without significant action on membrane receptors - consciousness is lost in living organisms; and there are living organisms that show complex signs of consciousness even at the cellular level, where neural networks are not even implicated. Recent studies have also demonstrated superradiance in tryptophan molecular structures in biological tissues - a molecule bearing resemblance to the neurotransmitter serotonin - which display macroscopic quantumlike phenomena. Under these circumstances, it would be worth investing time in understanding nature's models of brilliance before committing to any large-scale AI programme, especially as, by 2030, projected U.S. annual spending on AI - across software, services, hardware, and infrastructure - will very likely be on the order of half a trillion to nearly a trillion dollars per year, depending on how fast it grows and what share the U.S. retains of a rapidly expanding global market.


Our relentless drive to build ever-larger AI systems and scale proof-of-work blockchains has blinded us to their fundamental mismatch with the biology of mind and the physics of efficiency. By 2030, we may be spending upward of $500 billion annually, and investing over $100 billion more in power infrastructure alone, to run feed-forward logic and brute-force consensus mechanisms that consume thousands of times more energy than a human brain. Yet despite these vast resources, today’s silicon nets remain “glorified autocorrects,” lacking the bidirectional, oscillatory, and nonlocal dynamics that underlie perception, learning, and consciousness in living systems. If we continue down this path, we risk locking ourselves into a costly technological quagmire - one that wastes enormous resources while most Americans are living paycheck-to-paycheck, to amplify surveillance and central control without ever attaining true understanding or self-awareness. Instead, we should heed the lessons of anesthetic research and quantum-biological models such as Orch-OR, which point toward microtubule-based coherence, fast dipole networks, and entropic gravity as the substrates of conscious information binding. Redirecting even a fraction of our AI budget toward experiments in quantum neurophysics, distributed memory architectures, and re-entrant hardware designs could yield architectures that match the brain’s elegance - and do so on mere watts, not gigawatts.

The choice is clear: continue pouring money into ever-bigger neural-net black boxes, or pioneer a new paradigm grounded in the very physics of life. Our future intelligence - and our energy future - may depend on which path we take next.


Updated: Apr 16

I have been reviewing the president's plans for AI and cryptocurrency, and I have to say I have many concerns. Beyond job displacement, privacy, and concentrations of power, wealth, and income inequality, there is the fact that the whole philosophy behind cryptocurrency was to provide a private way to transact value outside of government scrutiny - which becomes paradoxical when the government controls or regulates the blockchains on which it depends. There is also the problem of the all-knowing AI black box we are all asked to defer our trust to (rather than our trust in one another), controlled by a few central elites under the guise of "AI safety," which leaves room for information control and manipulation. Ultimately, though, the issues with AI/crypto infrastructure will come down to energy. Like any system, the final battle that signals its demise will be thermodynamic - which, if you believe the work of Erik Verlinde, also forms the basis for the most fundamental force of nature: gravity. Will our current AI/crypto strategy collapse under its own weight?


The first thing to consider is that, by current projections, AI-optimized data centers will more than quadruple their electricity usage in the United States by 2030, making them a major driver of rising energy consumption. That sounds bad, but whether you believe in climate change or not is beside the point: we have to ask ourselves whether investment of this kind is a good use of our resources, and whether it will improve the quality of life of the average person more substantially than if the energy were used to benefit society more directly (can you think of better uses for energy than scaling up AI systems?). Having an all-knowing intelligent system or being of sorts to ask questions of - one that seemingly has all the answers to our problems - sounds good, though, so we need a deeper analysis.


Furthermore, as investments in cryptocurrencies increase, if a "Proof-of-Work" (PoW) model were to remain dominant as it is today, then with mass adoption of cryptocurrency to run the transactions of our neoliberal capitalist model, energy consumption could scale up dramatically - potentially increasing from around 100–150 TWh/year (the current scale for Bitcoin) to somewhere in the range of several hundred to over 1,000 TWh/year. This increase would be driven by the need to secure an ever-larger store of value and to handle vastly increased transaction volumes. In other words, if a mass-adopted PoW-based cryptocurrency system were to come about by 2030, the U.S. electrical grid might be required to increase its continuous power capacity by approximately 5% to 20% - roughly an additional 50–200 GW of capacity compared to today's levels - though the actual numbers depend on numerous uncertain technological, economic, and policy developments.
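The unit conversion behind those figures is easy to verify: annual energy in TWh/year divided by the hours in a year gives continuous power in GW (the installed-capacity figure in the comment below is our addition, not the post's):

```python
# Sanity-checking the conversion from annual energy to continuous power.

HOURS_PER_YEAR = 8760

def twh_per_year_to_gw(twh: float) -> float:
    return twh * 1000 / HOURS_PER_YEAR  # 1 TWh = 1000 GWh

for twh in (150, 500, 1000):
    print(f"{twh} TWh/yr ~= {twh_per_year_to_gw(twh):.0f} GW continuous")

# 1,000 TWh/yr is ~114 GW of continuous draw; against a U.S. grid of
# roughly 1,100-1,300 GW of installed capacity (our figure), peak-demand
# and redundancy margins push the required new capacity toward the
# 50-200 GW range the text cites.
```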


With these extraordinary increases in resource consumption, we need to ask ourselves whether this is really reasonable - whether this extreme increase in consumption, at a time when over 60% of the population is living paycheck to paycheck and experiencing so much alienation and anxiety about their futures that they are not even having kids, will really help the people we know more efficiently than if the resources were applied directly. We need to ask whether deferring our trust to an "all-knowing" central entity - especially one entangled so closely with state or corporate interests, whether organized religion or an AI system - is really superior to our lateral trust in one another and what we are capable of together. If we find that the current AI/crypto strategy is nonsensical, it is our duty to expose these findings and challenge the direction we are taking things.


So what really is the purpose behind development of AI and cryptocurrency systems? Arguably, AGI is the final form of neoliberal capitalism - an apparatus of surveillance and information control. In order to keep society from devolving into anarchy, any state needs a way of abstracting out information, since organizing populations toward collective cooperation becomes intractably difficult beyond the Dunbar limit - the number of stable interpersonal relationships we humans can cognitively sustain as a species. In the past, religious systems were effective for this: people interacted with religious symbols and norms to which they deferred their trust. But as trust in institutions like religion has declined, tech and science have remained among the last safe havens to rely on for statecraft (less so for science after covid). More troublingly, if history is an indicator, societies devolve into anarchy about every 80-120 years - if you believe the work of William Strauss and Neil Howe - as institutions become increasingly complex to maintain, succumbing, perhaps inevitably, to entropy.


There has been a good amount of talk about "AI safety." Surely you have seen figures promoting fears of AI, and that these systems should be controlled or regulated in various ways, and that there is a possibility of a "technological singularity" which would be catastrophic - a point when our tech and AI systems become so intelligent, that they threaten our civilization and humanity as a whole. We should scrutinize this further - who or what are we protecting, and what are we needing to stay safe from? What really happens at this "technological singularity?" What is the difference between AI systems - intelligence - and consciousness? How does cryptocurrency fit into all of this?


Central planners often use Luhmann's systems theory, complex adaptive systems and control theory, quantum chaos theory, and agent-network-based strategies to model populations and make decisions, where agents (like you or me) facilitate flows of information between social and economic systems (institutions), and where their socioeconomic status can be modeled by computational complexity classes. There is a requirement for a way to dissociate flows of information between social and economic institutions so that, in the form of monetary transactions, they flow in one direction but not the other - and for this you need central ciphers - encryptions - to enforce the one-way flows of information. Economists have increasingly relied on sociophysics and econophysics to make decisions, while at university they often teach neoclassical economics as a simpler introduction.
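A minimal sketch of such a "central cipher" enforcing one-way information flow, using a cryptographic hash, the simplest one-way function; the commitment framing is our illustration, not a description of any specific blockchain:

```python
import hashlib

def commit(transaction: str, salt: str) -> str:
    """Publish h; the transaction's details flow no further."""
    return hashlib.sha256((salt + transaction).encode()).hexdigest()

def verify(transaction: str, salt: str, h: str) -> bool:
    """Anyone holding the preimage can prove it matches, but no one
    can invert h back into the transaction: a one-way flow."""
    return commit(transaction, salt) == h

h = commit("alice pays bob 10", salt="s3cr3t")
print(h[:16], "...")                              # public, one-way record
print(verify("alice pays bob 10", "s3cr3t", h))   # True
print(verify("alice pays bob 99", "s3cr3t", h))   # False
```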


Trying to quantize value is a paradox, like quantum gravity. The spectral theory of value describes value through these information flows. If we as citizens trusted one another, and met our needs through one another rather than through central institutions, the whole system would collapse and no longer be needed (one possibility for this "singularity," representing a sort of socialist uprising). As a corollary, the less we as citizens trust one another and can cooperate to meet our needs, the more we must depend on central institutions and transactional thinking to accomplish our goals (the other possibility for this "singularity," representing a more fascistic end). One problem with the latter scenario is that too much reliance on transactional thinking is dual to an inability to form the stable relationships that would raise the fertility rates needed to sustain an aging economy, while too much dependence on immigration may increase cultural anxiety and perceived alienation - a paradox.


The truth is that AI is much different from the way our brains work. Remarkably, our brains consume only about 20 watts of continuous power (the amount needed to run a dim LED light), while an equivalent AI system draws on the order of hundreds of kilowatts to megawatts - roughly 1,000x or more. The way the brain stores information defies classical physics (the physics used in our computers), and there is experimental evidence of synchrony between the brains of individuals - like a sort of quantum entanglement. The way the brain works also defies our best attempts to emulate it with quantum computing technology, as there is no realistic classical or quantum physical mechanism we currently know of to explain the way information backpropagates and adjusts weights in neural networks. These insights have led researchers like Dr. Penrose and Dr. Hameroff to propose new (controversial but intriguing) theories of how the brain works based on new physics - based on gravity itself. If we are to build a society that increasingly depends on AI systems as feedback control loops, we must reduce their power consumption by further investigating this new physics, which could save trillions of dollars' worth of resources. Based on this analysis, it sure seems it would be more efficient to just depend on one another than to go through any central system of transactions based on encryptions at all, or to depend on AI when our own brains are more efficient by several orders of magnitude. However, it is undeniable that when we use tools like ChatGPT or Grok, it sure seems to make things easier.


So is it possible that our best AI systems could collectively overtake us and become more intelligent? No. Our AI systems are a collective linear reflection - a search and synthesis algorithm - which runs into scaling limits with the amount of data it consumes, and which, when trained on its own output, begins after a few iterations to produce garbled nonsense, because there is no interpreter in the loop. We, as citizens, are the interpreters, and the illusion we are all living under is a belief in anything else. Several studies and experiments in computer science have demonstrated that when a system’s output is repeatedly fed back as its input, the process often “degenerates” into repetitive or meaningless sequences. This behavior has been observed in a variety of contexts, including generative language models, iterative algorithmic processes, and even some cellular automata.


When generative language models (like recurrent neural networks or transformer-based models) are put in a closed feedback loop - that is, when their output is used as new input without human interpreters in the loop - errors and small deviations from natural-language statistics accumulate over time, which can make the output incomprehensible. The model eventually “locks in” on a limited set of tokens or phrases and produces repetitive or degenerate text. This happens because these models are statistically optimized to predict the most likely next word given the immediate context. Without human corrective feedback, models "hallucinate" or lose their context - much as books are interpreted over time like living documents whose meaning changes with the setting or period. In the absence of fresh external inputs, the context becomes narrow and self-referential, leading to sequences that lack novelty and coherence.
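A toy demonstration of that closed-loop degeneration, with a bigram Markov chain standing in for the language model (an analogy, not a formal result):

```python
import random
from collections import defaultdict

random.seed(0)

def fit_bigrams(text):
    """A crude stand-in for a language model: bigram counts."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=40):
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

corpus = ("the cat sat on the mat while the dog slept near the door and "
          "the bird sang above the quiet house in the morning light")

# Closed loop: each generation is "trained" only on the previous output.
text = corpus
for generation in range(6):
    model = fit_bigrams(text)
    text = generate(model, "the")
    print(generation, ":", text[:60])
# Vocabulary and diversity shrink generation by generation -- a toy
# analogue of the degeneration described above.
```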


In many iterative and feedback-controlled systems, and in the field of nonlinear dynamical systems used in models by central planners (pioneered in the Soviet Union under the theory of cybernetics), introducing randomness can prevent the system from converging prematurely on a degenerate or “stuck” state (think of how mutations to DNA keep a species evolving and surviving amid ever-changing environmental conditions). Such attractors can be dangerous for institutional stability - catastrophe theory, bifurcation theory, and critical point theory all describe these tipping points - because they lead to information cascades and rapid collective actions of agents (macroscopic quantumlike behaviors that magnify quantum properties across scales, known as quantum chaos), which could result in anarchy. Noise introduced into social and economic institutions can act as a sort of error-correcting nudge mechanism, keeping the system exploring a broader set of configurations - at least until all configurations have been exhausted and the complexity content becomes too high for any region of spacetime to carry. The Ryu–Takayanagi formula connects quantum entanglement entropy in a boundary quantum field theory to geometric areas in a higher-dimensional gravitational spacetime via the holographic principle (AdS/CFT); this is precisely the kind of relationship that allows one to quantify how much quantum complexity, or information content, a region of spacetime can support.
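A toy version of that "error-correcting nudge" (our sketch, not a planner's model): gradient descent on a double-well potential locks into the nearest attractor, while injected noise keeps the system hopping between basins.

```python
import numpy as np

rng = np.random.default_rng(3)

def descend(x0, noise=0.0, steps=2000, lr=0.05):
    """Gradient descent on the double-well V(x) = (x**2 - 1)**2,
    with optional injected randomness."""
    x = x0
    for _ in range(steps):
        grad = 4 * x * (x**2 - 1)
        x -= lr * grad + noise * rng.normal()
    return x

# Without noise the system locks into whichever attractor is nearest
# (the premature "stuck" configuration); with noise it keeps exploring
# and can hop between basins.
print(descend(x0=0.4, noise=0.0))                                 # settles at the +1 well
print([round(descend(x0=0.4, noise=0.15), 2) for _ in range(5)])  # mixes between wells
```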


Analogously, one could speculate that if spacetime is emergent from the quantum entanglements of particles, and gravity itself is an entropic or thermodynamic force as suggested by Erik Verlinde, then, over time, the complexity content in any configuration of particles (analogous to agents displaying inter-brain synchrony and quantum chaotic behavior) saturates to an unavoidable, globally maximally degenerate state (the UV fixed point in asymptotically safe gravity theory) that converges on a collective gravitational action (the Einstein-Hilbert action) to reset the complexity content and the interrelations between particles (an idea similar to Dr. Penrose's theory that systems which become maximally entangled reach a point of gravitational collapse). In our analogy, this would be the "technological singularity" - a societal tipping point where stabilizing institutions become too intractably complex to maintain, resulting in a descent into anarchy.


So ultimately, the questions we need to begin asking are: which systems in our society are worth propping up as they are, and which need to be disassembled or reconfigured? Are institutions as they stand serving us - the people, the working class - or the current global elite? Whether AI is a tool for good or a tool of control will, like almost any revolutionary change in history, be a matter of perspective. In this analysis, it may be that entropy, as always, is the final and unrelenting champion that will inevitably determine our fate.




My Story

Get to Know Me

I have been on many strange adventures traveling off-grid around the world, which have contributed to my understanding of the universe and my dedication to science advocacy, housing affordability, academic integrity, and education funding. After witnessing Occupy Cal amid $500 million in budget cuts to the UC system, along with corporate and government corruption and academic gatekeeping, I decided to achieve background independence and live in a trailer "tiny home" I built, so that I could pursue my endeavors without distorting influences and economic coercion. My character flaws are not nonperturbatively renormalizable.
