- Trevor Alexander Nestor
I have been reviewing the president's plans for AI and cryptocurrency, and I have to say I have many concerns. Beyond concerns about job displacement, privacy, concentrations of power and wealth, and income inequality, there are deeper contradictions. The whole philosophy behind cryptocurrency was to provide a private way to transact value outside of government scrutiny, which becomes paradoxical when the government controls or regulates the blockchains on which it depends. There is also the problem of the all-knowing AI black box we are all asked to defer our trust to (rather than trusting one another), controlled by a few central elites under the guise of "AI safety," which leaves room for information control and manipulation. Ultimately, though, the issues with AI/crypto infrastructure will come down to energy. Like any system, the final battle that signals its demise will be thermodynamic - which, if you believe the work of Erik Verlinde, also forms the basis for the most fundamental force of nature: gravity. Will our current AI/crypto strategy collapse under its own weight?
The first thing to consider is that, by current projections, AI-optimized data centers will more than quadruple their current electricity usage in the United States by 2030, making them a major driver of rising energy consumption. That sounds bad, but whether you believe in climate change or not is beside the point: we have to begin to ask ourselves whether investing our resources this way will actually improve the quality of life of the average person more substantially than if the energy were used to benefit society more directly (can you think of better uses for energy than scaling up AI systems?). Having an all-knowing intelligent system or being of sorts to ask questions of, one that seemingly has all the answers to our problems, sounds good though, so we need a deeper analysis.
Furthermore, as investment in cryptocurrencies increases, if the "Proof-of-Work" (PoW) model were to remain dominant as it is today, then mass adoption of cryptocurrency to run transactions in our neoliberal capitalist model could scale energy consumption up dramatically: from around 100–150 TWh/year (the current scale for Bitcoin) to somewhere in the range of several hundred to over 1,000 TWh/year, driven by the need to secure an ever-larger store of value and to handle vastly increased transaction volumes. In other words, if a mass-adopted PoW-based cryptocurrency system were to come about by 2030, the U.S. electrical grid might be required to increase its generating capacity by approximately 5% to 20%, roughly an additional 50–200 GW compared to today's levels, though the actual numbers depend on numerous uncertain technological, economic, and policy developments.
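To make the order of magnitude concrete, here is a minimal back-of-the-envelope sketch (the scenario figures are illustrative assumptions, not forecasts) converting an annual energy figure into the average continuous power the grid would have to supply:

```python
# Back-of-the-envelope conversion: annual energy (TWh/year) -> average
# continuous power (GW). Scenario values below are illustrative assumptions.

HOURS_PER_YEAR = 8_760

def avg_power_gw(annual_twh: float) -> float:
    """Average continuous power in GW implied by an annual consumption in TWh."""
    return annual_twh * 1_000 / HOURS_PER_YEAR  # TWh -> GWh, divided by hours per year

for twh in (300, 600, 1_000):  # hypothetical low / mid / high PoW scenarios
    print(f"{twh:>5} TWh/yr  ->  {avg_power_gw(twh):5.1f} GW average draw")
```

Installed generating capacity would have to exceed that average draw once reserve margins and the capacity factors of new generation are accounted for, which is how one lands in the rough 50–200 GW range quoted above.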
With these extraordinary increases in resource consumption, we need to begin to ask ourselves whether this is really reasonable: whether this extreme increase in consumption, at a time when over 60% of the population is living paycheck to paycheck and experiencing so much alienation and anxiety about their futures that they are not even having kids, will really help the people we know more efficiently than if those resources were applied more directly. We need to ask ourselves whether deferring our trust to an "all-knowing" central entity, especially one entangled so closely with state or corporate interests, whether it is an organized religion or an AI system, is really superior to our lateral trust in one another and what we are capable of together. If we find that the current AI/crypto strategy is nonsensical, it is our duty to expose these findings and challenge the direction we are taking things.
So what really is the purpose behind the development of AI and cryptocurrency systems? Arguably, AGI is the final form of neoliberal capitalism: an apparatus of surveillance and information control. In order to keep society from devolving into anarchy, any state needs a way of abstracting out information, because organizing populations into collective cooperation becomes intractably difficult beyond the Dunbar limit - the limit on the number of stable interpersonal relationships we humans can cognitively sustain as a species. In the past, religious systems were effective for this: people would interact with religious symbols and norms to which they deferred their trust. However, as trust in institutions like religion has declined, tech and science have remained the last few safe havens to rely on for statecraft (though less so for science after covid). More troublingly, if history is an indicator and you believe the work of William Strauss and Neil Howe, societies devolve into anarchy about every 80-120 years as institutions become increasingly complex to maintain - succumbing to entropy, perhaps inevitably.
There has been a good amount of talk about "AI safety." Surely you have seen figures promoting fears of AI, arguing that these systems should be controlled or regulated in various ways, and warning of a possible "technological singularity" that would be catastrophic: a point when our tech and AI systems become so intelligent that they threaten our civilization and humanity as a whole. We should scrutinize this further: who or what are we protecting, and what do we need to stay safe from? What really happens at this "technological singularity"? What is the difference between AI systems - intelligence - and consciousness? How does cryptocurrency fit into all of this?
Central planners often use Luhmann's systems theory, complex adaptive systems and control theory, quantum chaos theory, and agent- and network-based strategies to model populations and make decisions. In these models, agents (like you or me) facilitate flows of information between social and economic systems (institutions), and their socioeconomic status can be modeled by computational complexity classes. There is a requirement for a way to dissociate flows of information between social and economic institutions so that, in the form of monetary transactions, they flow in one direction but not the other - and for this you need central ciphers - encryption - to enforce the one-way flows of information. Economists have increasingly relied on sociophysics and econophysics to make decisions, while at university neoclassical economics is often taught in the classroom as a simpler introduction.
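As a loose illustration of what "one-way" means here, a minimal sketch using a cryptographic hash as the simplest stand-in for such a cipher (an assumption made for illustration, not a description of any particular monetary system):

```python
# A commitment built from a one-way (hash) function: publishing the digest
# reveals nothing useful about the transaction, yet anyone holding the nonce
# can later verify it. Computing forward is cheap; inverting the digest is
# computationally infeasible.
import hashlib
import secrets

def commit(transaction: str) -> tuple[str, str]:
    """Publish a digest of a transaction without revealing its contents."""
    nonce = secrets.token_hex(16)  # private blinding randomness
    digest = hashlib.sha256((nonce + transaction).encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: str, transaction: str) -> bool:
    """Confirm a claimed transaction against the published digest."""
    return hashlib.sha256((nonce + transaction).encode()).hexdigest() == digest

digest, nonce = commit("alice pays bob 10 units")
print(verify(digest, nonce, "alice pays bob 10 units"))  # True
print(verify(digest, nonce, "alice pays bob 99 units"))  # False
```

That asymmetry - easy to compute forward, infeasible to reverse - is the property that lets information flow in one direction only.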
Trying to quantize value is a paradox, like quantum gravity. The spectral theory of value describes value through these information flows. If we as citizens trusted one another and met our needs through one another rather than through central institutions, the whole system would collapse and no longer be needed - one possibility for this "singularity," representing a sort of socialist uprising. As a corollary, the less we as citizens trust one another and are able to cooperate to meet our needs, the more we must depend on central institutions and transactional thinking to accomplish our goals - the other possibility for this "singularity," representing a more fascistic end. One problem with the latter scenario is that too much reliance on transactional thinking is dual to an inability to form the stable relationships that would raise the fertility rates needed to sustain an aging economy, while too much dependence on immigration may increase cultural anxiety and perceived alienation - a paradox.
The truth is that AI works very differently from the way our brains work. Remarkably, our brains consume only about 20 watts of continuous power (roughly the draw of a household LED bulb or two), while an equivalent AI system requires hundreds of kilowatt-hours to megawatt-hours per day - roughly 1,000x or more the brain's energy use. The way the brain stores information defies classical physics (the physics used in our computers), and there is experimental evidence of synchrony between the brains of individuals - like a sort of quantum entanglement. The way the brain works also defies our best attempts to emulate it with quantum computing technology, as there is no realistic classical or quantum physical mechanism we currently know of to explain how information backpropagates and weights are adjusted in biological neural networks. These insights have led researchers like Dr. Penrose and Dr. Hameroff to propose new (controversial but intriguing) theories of how the brain works based on new physics - based on gravity itself. If we are to build a society that increasingly depends on AI systems as feedback control loops, we must reduce their power consumption by further investigating this new physics, which could save trillions of dollars worth of resources. Based on this analysis, it sure seems like it would be more efficient to simply depend on one another than to go through any central system of transactions based on encryption at all, or to depend on AI when our own brains are more efficient by several orders of magnitude. However, it is undeniable that when we use tools like ChatGPT or Grok, it sure seems to make things easier.
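As a rough check on that ratio (taking 20 W for the brain and, purely as an illustrative assumption, about 500 kWh per day for an AI system handling a comparable workload):

$$E_{\mathrm{brain}} \approx 20\ \mathrm{W} \times 24\ \mathrm{h} \approx 0.5\ \mathrm{kWh/day}, \qquad \frac{E_{\mathrm{AI}}}{E_{\mathrm{brain}}} \approx \frac{500\ \mathrm{kWh/day}}{0.5\ \mathrm{kWh/day}} = 10^{3}.$$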
So is it possible that our best AI systems could collectively overtake us and become more intelligent? No. Our AI systems are a collective linear reflection - search and synthesis algorithms - which run into scaling limits with the amount of data they consume, and which, when trained on their own output, begin after a few iterations to produce garbled nonsense because there is no interpreter in the loop. We, as citizens, are the interpreters, and the illusion we are all living under is a belief in anything else. Several studies and experiments in computer science have demonstrated that when a system's output is repeatedly fed back as its input, the process often tends to "degenerate" into repetitive or meaningless sequences. This behavior has been observed in a variety of contexts, including generative language models, iterative algorithmic processes, and even some cellular automata.
When generative language models (like recurrent neural networks or transformer-based models) are put in a closed feedback loop - that is, when their output is used as new input without human interpreters in the loop - errors and small deviations from natural-language statistics accumulate over time and can render the output incomprehensible. The model eventually "locks in" on a limited set of tokens or phrases and produces repetitive or degenerate text. This happens because these models are statistically optimized to predict the most likely next word given the immediate context. Without human corrective feedback, models "hallucinate" or lose their context - much like how books are reinterpreted over time as living documents whose meaning changes with the setting or time period. In the absence of fresh external inputs, the context becomes narrow and self-referential, leading to sequences that lack novelty and coherence.
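Here is a minimal toy sketch of that closed-loop degeneration, assuming a first-order Markov chain as a stand-in for a generative model (collapse in real large models is analogous but far more complex): the chain always emits its most likely next word, its output becomes the next round's training text, and the vocabulary quickly collapses into a short repetitive cycle.

```python
# Toy "model collapse": fit a next-word table, always emit the most likely
# successor, then refit on the generated text. Diversity drops sharply and
# the output locks into a small repetitive cycle.
import random
from collections import defaultdict

random.seed(0)

def fit(text: str) -> dict:
    """Build a next-word table from whitespace-tokenized text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table: dict, length: int = 120) -> str:
    """Always pick the most frequent successor ("most likely next word")."""
    word = random.choice(list(table))
    out = [word]
    for _ in range(length - 1):
        successors = table.get(word)
        word = max(successors, key=successors.count) if successors else random.choice(list(table))
        out.append(word)
    return " ".join(out)

corpus = ("the quick brown fox jumps over the lazy dog while the quiet cat "
          "watches the brown dog chase the quick fox across the green field ") * 20

print("corpus distinct words:", len(set(corpus.split())))
text = corpus
for generation in range(5):
    text = generate(fit(text))  # the model's output becomes its next input
    print(f"generation {generation}: distinct words = {len(set(text.split()))}")
```

In this toy, the distinct-word count drops sharply after the first closed-loop generation and then stays pinned to a tiny cycle, mirroring the lock-in described above.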
In many iterative and feedback-controlled systems, and in the field of nonlinear dynamical systems used in models by central planners (pioneered in the Soviet Union under the theory of cybernetics), introducing randomness can prevent the system from converging prematurely on a degenerate or "stuck" state - think of how mutations to DNA keep a species evolving and surviving under ever-changing environmental conditions. Such attractors can be dangerous for institutional stability (catastrophe theory, bifurcation theory, and critical point theory all describe these tipping points), because they lead to information cascades and rapid collective actions of agents (macroscopic quantumlike behaviors that magnify quantum properties across scales - known as quantum chaos), which could result in anarchy. Noise introduced into social and economic institutions can therefore act as a sort of error-correcting nudge, keeping the system exploring a broader set of configurations - at least until all configurations have been exhausted and the complexity content becomes too high for any region of spacetime to carry. The Ryu–Takayanagi formula connects quantum entanglement entropy in a boundary quantum field theory to geometric areas in a higher-dimensional gravitational spacetime via the holographic principle (AdS/CFT), and it is precisely the kind of relationship that allows one to quantify how much quantum complexity, or information content, a region of spacetime can support.
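To make the role of noise concrete, here is a minimal sketch under toy assumptions (a tilted double-well potential standing in for a system with a dangerous "stuck" attractor; it is not a model of any real institution):

```python
# Deterministic gradient descent on a tilted double-well potential
# V(x) = (x^2 - 1)^2 + 0.3x stays trapped in the shallow basin it starts in,
# while the same dynamics plus a small random kick keeps exploring and also
# finds the deeper basin. Purely illustrative.
import random

random.seed(1)

def V(x: float) -> float:
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x: float) -> float:
    return 4 * x * (x * x - 1) + 0.3

def best_found(x: float, steps: int = 5_000, lr: float = 0.01, noise: float = 0.0) -> float:
    """Run (noisy) gradient descent and return the lowest-energy point visited."""
    best = x
    for _ in range(steps):
        x -= lr * grad(x) + noise * random.gauss(0.0, 1.0)
        if V(x) < V(best):
            best = x
    return best

print("no noise  :", round(best_found(0.9), 2))              # stuck near the shallow minimum (x ~ +1)
print("with noise:", round(best_found(0.9, noise=0.15), 2))  # also reaches the deeper minimum (x ~ -1)
```

For reference, the Ryu–Takayanagi relation mentioned above is usually written as

$$S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N},$$

where A is a region of the boundary theory and γ_A is the minimal bulk surface anchored on its edge.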
Analogously, one could speculate that if spacetime is emergent from the quantum entanglement of particles, and gravity itself is an entropic or thermodynamic force as suggested by Erik Verlinde, then over time the complexity content of any configuration of particles (analogized here to agents that display inter-brain synchrony and quantum chaotic behavior) saturates toward an unavoidable, globally maximally degenerate state (the UV fixed point in asymptotically safe gravity) that converges on a collective gravitational action (the Einstein–Hilbert action) to reset the complexity content and the interrelations between particles. This idea is similar to the theory of Dr. Penrose, in which systems that become maximally entangled reach a point of gravitational collapse. In our analogy, this would be the "technological singularity": a societal tipping point where stabilizing institutions become too intractably complex to maintain, resulting in a descent into anarchy.
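For completeness, the action invoked in that analogy has the standard form (in units where c = 1)

$$S_{\mathrm{EH}} = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,\bigl(R - 2\Lambda\bigr),$$

with g the metric determinant, R the Ricci scalar, and Λ the cosmological constant; nothing in the speculation above depends on its details.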
So ultimately, the questions we need to begin to ask are: which systems in our society are worth propping up as they are, and which ones need to be disassembled or reconfigured? Are institutions as they stand serving us - the people, the working class - or the current global elite? Whether AI is a tool for good or a tool of control will, like almost any revolutionary change in history, be a matter of perspective. In this analysis, it may be that entropy, as always, is the final and unrelenting champion that will inevitably determine our fate.