AI and the future
Introduction
- AI is a disruptive technology which will change the world and the way we do things
- like most disruptive technologies, it will bring negative outcomes and risks as well as positive outcomes and benefits, and there will be many who exploit and abuse it
- for most use cases, the highest-quality LLMs will need to run in the cloud until more efficient computer chips are designed to optimise AI
- laptops are reaching their maximum capabilities due to limits on heat dissipation (fan size and the amount of copper heat sink), and even this level is nowhere near adequate to train large AI models, or even to fine-tune them to your needs - hence there may be a return to desktop computing for those not wanting to pay ongoing subscriptions to online AI services
- nevertheless, small AI models without the full functionality of the latest LLMs will be designed to run on portable devices
- “over the long term, the future is decided by optimists”
- “positive thinking is a fraud as it is put forward when things are crap, but possibility is a language of creation” - Benjamin Zander
- alignment with the needs of humanity is a major issue: conflicting alignments are likely to arise from governments with disparate ideologies and geopolitical aspirations, from corporate control and profit goals, and from bad actors - and default AI training is probably not aligned with the needs of humanity, so it needs further active fine-tuning
- AI will ALWAYS have bias, as there is no uniform perception of the world or of how it should be - tune it in any direction and there will be people who object
- humans will increasingly “live” in a digital world thanks to increasing use of augmented reality and virtual reality
Forecasting the future is difficult
- exponential growth in AI and robotics needs to overcome a range of barriers if it is to continue at pace
- far more efficient computer hardware will be needed
- new software and physical embodiment or simulation approaches will be required for AI to gain a full world view and develop AGI-level capabilities and beyond - this will not be possible by training on language data alone, as language has such low bandwidth: a 4-year-old child has already taken in over 100x the data that a large language model has been trained on, thanks to the child's visual inputs of the world (humans and other animals do not need language to become intelligent) - see the back-of-envelope sketch below
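A rough back-of-envelope of the bandwidth claim above. Every number below is a hedged assumption (waking hours, optic-nerve bandwidth, training-set size), not a measurement; depending on the figures chosen, the ratio comes out anywhere from tens to hundreds of times.
<code python>
# Back-of-envelope: a 4-year-old's visual intake vs an LLM's text diet.
# Every constant here is an assumed, illustrative figure.

seconds_awake = 4 * 365 * 12 * 3600   # ~4 years at ~12 waking hours/day
optic_bytes_per_s = 2e7               # ~20 MB/s via the optic nerves (assumed)
child_bytes = seconds_awake * optic_bytes_per_s

llm_tokens = 1e13                     # ~10 trillion training tokens (assumed)
bytes_per_token = 2                   # rough average
llm_bytes = llm_tokens * bytes_per_token

print(f"child: {child_bytes:.1e} bytes")         # ~1.3e15
print(f"LLM:   {llm_bytes:.1e} bytes")           # ~2.0e13
print(f"ratio: {child_bytes / llm_bytes:.0f}x")  # ~63x with these inputs
</code>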
- current large language models can give the perception that they are incredibly smart, but they are still just textual probability machines - their outputs are not based on pre-planning, reasoning, or a deep understanding of cause and effect, only on learned probability curves over the most likely next token, and errors in this prediction increase as the context window grows or when the prompt is not covered by their training data (the toy model below illustrates the mechanism)
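To make the “probability machine” point concrete, here is a toy sketch of next-token generation. The hand-coded bigram table is a hypothetical stand-in for the billions of learned parameters in a real model; the point is that generation is just repeated sampling from learned probability curves, with no planning step.
<code python>
import random

# hypothetical learned next-token probabilities: P(next | current)
bigram = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sky": {"is": 1.0},
}

def generate(token, max_len=5):
    out = [token]
    while token in bigram and len(out) < max_len:
        options = bigram[token]
        # sample the next-token distribution - no reasoning,
        # no model of cause and effect, just learned probabilities
        token = random.choices(list(options), weights=list(options.values()))[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))   # e.g. "the cat sat"
</code>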
- it is relatively easy to see negative dystopian futures and doom scenarios, and how they may come about, such as:
- pandemics
- climate change with greater extremes in weather, rising sea levels causing loss of habitable land, and resource shortages
- loss of trust in science
- not being able to trust anything you see or hear as of 2023
- in the 19thC, Edgar Allan Poe warned, “Believe nothing you hear, and only one half that you see”; in the 21stC you cannot trust what you see in the digital world either!
- socio-political unrest and wars
- terrorism
- the species population curve - peak then decline as resources become increasingly exhausted or pathogens take their toll in an overcrowded world
- techno-feudalism
- the AI world is likely to be controlled by super-corporations, with the rest of the world effectively enslaved by having to pay rent for AI services that will become a requirement for living in the social world
- AI related risks such as:
- cybersecurity
- this will be partly mitigated as we will all probably also have AI agents which help us fight scams etc
- it would be a major issue if AI is able to circumvent our current encryption technologies
- loss of jobs, requiring flexible thinking to re-invent careers - this may most impact the vulnerable who lack the capacity to change
- further reduction in office / factory work, resulting in more working from home, which is likely to lead to loss of physical connections and loneliness
- increased economic divide
- consumer goods and services will increasingly become a subscription model requiring ongoing payments for most things
- this is likely to adversely impact the lower socio-economic classes who do not have the asset base to pay for this
- the ageing population will also put increasing pressure on governments to reduce the burden of pensions, healthcare, etc.
- mental health impacts
- manipulation of people by bad actors
- AI individualised targeted advertising and information
- in the 3rdC BC, Epicurus pointed out that advertising is all about inducing unnatural and unnecessary desires in people and playing on our fear of death; as such it drives us to buy what we don't truly need, and is thus a root cause of our problems
- 21stC social media has already taken this to a much higher level of social and individual manipulation and control - AI will likely take this much further, particularly by allowing bad actors to brainwash people with extremist views or erroneous information, while recommendation algorithms feed individuals self-reinforcing content that further cements narrow viewpoints (see the toy feedback-loop simulation below)
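A toy simulation of the self-reinforcing recommendation loop described above. The dynamics are deliberately simplistic and hypothetical: the system always shows the topic the user currently engages with most, and each exposure nudges interest further toward it.
<code python>
# Hypothetical recommender feedback loop: interests collapse to one topic.
interests = {"politics": 0.34, "sport": 0.33, "science": 0.33}

for step in range(20):
    shown = max(interests, key=interests.get)   # recommend the top topic
    interests[shown] += 0.05                    # exposure boosts that interest
    total = sum(interests.values())             # renormalise to a distribution
    interests = {k: v / total for k, v in interests.items()}

# politics dominates; the other interests have withered
print({k: round(v, 2) for k, v in interests.items()})
</code>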
- loss of control of AI
- unlikely as guard rails will be put in place
- war against humanity
- this is extremely unlikely in the near future as:
- this generally requires a deep-rooted desire for dominance, which exists in many species but probably not in future AI, as guard rails will be put in place
- it can be much harder to visualise optimistic protopian future possibilities
- this requires:
- optimism and some trust in the ongoing long term growth and prosperity of humanity
- Covid-19 showed us that when faced with a true urgency we can confront it with heroics - the projected timescale for development of a vaccine was 4 years; it was achieved within 1 year
- an understanding of history
- trans-generational empathy and consideration of how we can create a better future - will I be seen as a great ancestor?
- broad knowledge across many fields
- curiosity to discover as much as possible
- imagination to think outside the square
- how could new technologies actually be used in ways we have never dreamed of?
- IBM failed to make the most of its personal computer invention: its projections in the early 1980s were that sales would peak at some 25,000 within a couple of years, then dwindle over the next couple of years once the novelty wore off - IBM couldn't imagine how people would actually use it
- it is hard to create a new world if one cannot imagine it
- future thinking
- collaboration with others
- helps broaden the mind to other perspectives
- collective action eg. plant trees for future generations
- scenario planning to analyse potential possibilities
- examples include:
- AI is likely to make us all more empowered, intelligent and capable through access to smart AI agents, in a similar manner to a manager running a team of trained staff
- AI is likely to make many tasks more efficient
- amazing advances in science and medicine
- genetic engineering
- AI powered research to rapidly discover what we don't yet know
- genomics and transcriptomics
- microbiomics
- metabolomics
- epigenomics
- new pharmaceuticals and better vaccines
- massive expansion of materials science
- discoveries to cure cancer, atherosclerosis, hypertension, diabetes and improve aging for longer healthier lives
- genome- and epigenome-informed personalized treatments including vaccines for precision prevention and treatment
- 3D bioprinting to accurately replicate the biological environment for more personalized, optimized care of an individual (eg. gastric cancer)
- nanobots
- highly specific targeting of drug delivery
- monitoring of physiologic status
- new communication and control technologies
- ultrasonic modification of neural activity
- brain-machine interfaces such as neural implants
- AI robots to:
- reduce need for dangerous jobs
- create productivity increases
- perform more precise and repetitive tasks more reliably and more cheaply
- provide assistance - cognitive and physical
- robotic provision of personal care to the aged or disabled
- autonomous domestic robots are still a few years off yet
- new transport options as self-driving vehicles become reliable
- utilisation of a “digital self” which looks like us, talks like us, and can make decisions similar to those we would make
- Alibaba has already created tech that takes an image and a voice track and creates a very realistic video representation
- voice cloning is already very good and easily available
- integrating an LLM would be easy
- all it now needs is a large store of our personal memories, thoughts, preferences, opinions, etc. to provide a digital-self capacity (a minimal architecture sketch follows this list)
- this could be used to continue interactions with our loved ones after they have died
- at some point, this could then be embedded into a humanoid robot
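The “digital self” pipeline above can be sketched as a composition of components. Everything in this sketch is an assumption for illustration - the class names, the naive keyword retrieval, and the callable interfaces are hypothetical, not real APIs from Alibaba or anyone else.
<code python>
class PersonalMemory:
    """Store of a person's memories, preferences and opinions (hypothetical)."""
    def __init__(self, records):
        self.records = records

    def relevant(self, query):
        # naive keyword match as a stand-in for semantic retrieval
        words = query.lower().split()
        return [r for r in self.records if any(w in r.lower() for w in words)]

class DigitalSelf:
    """Wires together the components named in the text (all assumed)."""
    def __init__(self, memory, llm, voice_clone, avatar):
        self.memory = memory            # personal memories, opinions, etc.
        self.llm = llm                  # text generator conditioned on the memories
        self.voice_clone = voice_clone  # text -> audio in the person's voice
        self.avatar = avatar            # image + audio -> talking-head video

    def respond(self, question):
        context = self.memory.relevant(question)
        answer = self.llm(f"Answer as this person. Context: {context}\nQ: {question}")
        audio = self.voice_clone(answer)
        return self.avatar(audio)       # a video of "you" answering

# toy usage with placeholder callables standing in for real models
memory = PersonalMemory(["I grew up in Sydney", "I prefer tea over coffee"])
me = DigitalSelf(memory,
                 llm=lambda prompt: f"[reply generated from: {prompt[:50]}...]",
                 voice_clone=lambda text: f"<audio of '{text}'>",
                 avatar=lambda audio: f"<video using {audio}>")
print(me.respond("do you prefer coffee"))
</code>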
- re-valuing human specific characteristics not able to be replicated adequately by AI
- creativity, ethics, morality, philosophical aspects based upon the human experience
- physical human social connections and interactions
- physical human presence to validate trust and authenticity
Towards immortality
- Longevity Escape Velocity
- some say that currently, science is adding 3 months to your lifespan EACH year
- Longevity Escape Velocity will occur when it is able to add at least 1 year to your life span every year you remain alive
- some believe those in reasonably good shape and with reasonable means will reach this by 2030 (eg. Ray Kurzweil) - the toy arithmetic below illustrates the threshold
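The arithmetic behind Longevity Escape Velocity in a toy model, with assumed numbers: start with 30 years of remaining life expectancy, and let medical science add a fixed number of years of expectancy per calendar year lived. Below the threshold of 1 year per year the clock still runs out; at or above it, it never does.
<code python>
def years_until_death(remaining, gain_per_year, cap=200):
    """Toy LEV model: count calendar years until expectancy runs out."""
    years = 0
    while remaining > 0 and years < cap:
        remaining -= 1.0            # one year of life consumed
        remaining += gain_per_year  # expectancy added by science that year
        years += 1
    return years if remaining <= 0 else float("inf")

print(years_until_death(30, 0.25))  # +3 months/year: dies in 40 years
print(years_until_death(30, 1.0))   # LEV reached: inf (never runs out)
</code>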
- understanding the cellular and organ systems which control regeneration and cell death
- this will be key to substantially extending longevity by allowing regeneration, curing cancers, reducing aging, etc.
Towards Super-intelligent AI (ASI)
- many believe we are now in a race similar to the World War II race to create the atom bomb - the 1st super-weapon with the power to dominate the world - ASI could well be the next generation of global super-weapon, and the 1st state to gain ASI may achieve a globally dominant position
- it is generally considered that we will reach an AI level of “general human intelligence”, or AGI, by around 2027, thanks to further upscaling of compute, training data and algorithms, and the removal of current AI model limitations
- it is thought by many that once AGI is achieved, it could be used to create millions of “agents” performing traditional AI engineering tasks and R&D, and that this could dramatically speed up progress toward ASI
- this will require much larger data clusters than we currently have and these will in turn require massive amounts of GPU chips, electrical power inputs and cooling systems
- it is also likely that new more energy efficient neural learning mechanisms (eg. continuous learning “liquid / neuroplastic” neural networks, etc) and AI optimised chips (eg. analog chips, neuromorphic chips, quantum computers, etc) will be developed to help drive this
- there is a risk that, if ASI is created by a less-than-benevolent organisation or state, it could be used as a super-weapon to take control of the world; but to achieve this, they would probably need to:
- steal the AGI models and algorithms from the US AI labs (that is probably fairly easy for large states with a past history of doing so)
- create their own GPU chips, given that the US has banned export of these (although the vast majority are made in Taiwan for the US)
- China may be able to do this, albeit restricted to 7nm chip technologies with their higher power costs
- Nvidia chip technologies have massively improved power cost per FLOP - a 1000x improvement over the past decade (see the quick check below) - while software improvements have also added to power-cost gains
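A quick sanity check on what a 1000x per-decade improvement implies as a compound annual rate (the 1000x figure is the claim above, taken at face value):
<code python>
total, years = 1000, 10
annual = total ** (1 / years)
print(f"{annual:.2f}x per year")   # ~2.00x - roughly doubling every year
</code>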
- create their own massive data clusters with sufficient electricity and cooling
- in 2024, the majority of AI clusters are being deployed and planned in the US, necessitating a tripling of datacenter critical IT capacity from 2023 to 2027.
- currently, China has by far the greatest electricity production of any country, as well as the fastest growth in electricity production
- in 2022, China installed more solar panels than the entire historical installation in the United States
- China's 14th Five-Year Plan (FYP) targets a 50% increase in renewable energy generation from 2.2 trillion kWh in 2020 to 3.3 trillion kWh in 2025.
- China is increasing its nuclear power capacity as part of its national economic blueprint, which will help meet the growing power demand from AI datacenters
- Guangdong province plans to increase its computing power to exceed 40 exaflops by 2025 and 60 exaflops by 2027, contributing to China's goal of reaching 300 exaflops by 2025. This initiative includes establishing a mature AI industrial chain, from chip development to computational infrastructure and AI-driven applications.
- the US:
- a growing number of datacenters in the US are being built with their own nuclear reactors, offering a potentially huge source of clean power
- as of 2024, AI datacenters in the US are expected to see a significant increase in power demand: by 2030, datacenters are projected to use 8% of US power (up from 3% in 2022), and the US is expected to invest around $50 billion in new power generation to meet this demand
- the US faces bottlenecks in transmission lines and transformers, which could stall datacenter buildouts in key locations like Northern Virginia, Arizona, and Santa Clara, California
- to achieve super-military advantage, they would probably need to also have military robot manufacturing facilities built by robots
- China already has a massive manufacturing sector and is making headway into mass robot manufacture, although current capabilities are generally limited
- the US has an advanced robotics industry, but perhaps may not have the same manufacturing capacity as China - it will, however, have access to better GPU chips
- in June 2024, recently fired OpenAI superalignment employee Leopold Aschenbrenner, who allegedly declined to sign a non-disclosure agreement worth $US1m in OpenAI shares on his resignation, published Situational Awareness - a treatise on the future of AI in the next decade
- “Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them.”
- “The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll (sic. US) be in an all-out race with the CCP (sic. China); if we’re unlucky, an all-out war.”
- “AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ orders of magnitude) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic. ”
- “Before we know it, we would have superintelligence on our hands—AI systems vastly smarter than humans, capable of novel, creative, complicated behavior we couldn’t even begin to understand—perhaps even a small civilization of billions of them. Their power would be vast, too. Applying superintelligence to R&D in other fields, explosive progress would broaden from just ML research; soon they’d solve robotics, make dramatic leaps across other fields of science and technology within years, and an industrial explosion would follow. Superintelligence would likely provide a decisive military advantage, and unfold untold powers of destruction. We will be faced with one of the most intense and volatile moments of human history.”
- “Whoever controls superintelligence will quite possibly have enough power to seize control from pre-superintelligence forces. (sic. and overthrow govts)”
- “There is a real possibility that we will lose control, as we are forced to hand off trust to AI systems during this rapid transition.”
- “More generally, everything will just start happening incredibly fast. And the world will start going insane. ”
- “We’re developing the most powerful weapon mankind has ever created. The algorithmic secrets we are developing, right now, are literally the nation’s most important national defense secrets—the secrets that will be at the foundation of the US and her allies’ economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world. And yet AI lab security is probably worse than a random defense contractor making bolts. ”
- “Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.”
- HOWEVER, there are significant issues that need to be addressed before we can get to ASI - in particular, a vastly increased power supply and access to massive amounts of better training data than the mostly rubbish data on the internet that is currently used.