artificial intelligence

Introduction

  • AI, simply put, is simulated intelligence: software with the ability to depict or mimic human brain functions, with the emphasis on “mimic”
  • AI is great at tasks such as data analysis, pattern recognition, image analysis and information processing
  • AI algorithms can have bias, meaning there is a statistical variance in predictive performance between one group of patients or people and another, so predictive capabilities differ for one population versus another - AI algorithms are often “learned” using narrow cohorts of patients (eg. white males) and thus may not be accurate at predicting for other subgroups of patients or for uncommon conditions, as the sketch below illustrates
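
A minimal sketch of checking for this kind of bias: compare a model's accuracy per subgroup rather than overall. The groups, labels and predictions below are entirely synthetic and purely illustrative.

<code python>
# Per-group accuracy check on synthetic (group, true_label, predicted_label)
# triples. A large accuracy gap between groups flags the bias described above.
from collections import defaultdict

predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, truth, pred in predictions:
    totals[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(totals):
    print(f"{group}: accuracy = {correct[group] / totals[group]:.2f}")
</code>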

AI architectures

Statistical probabilistic association AI

  • this is the main architecture in use today, used by ChatGPT, etc
  • a neural network mathematical model is trained on enormous amounts of data, and the system develops and stores billions of parameter values
  • it is great for creating Large Language Models (LLMs), which can be generalisable, as well as for pattern recognition, object detection and image generation
  • however, it does not have a logic base of cause and effect, nor does it understand physical reality
  • when it is asked a question for which there was no adequate training data, the statistical predictive algorithms may need to accept a low-probability prediction word, which then becomes a self-perpetuating incorrect response. This has been called “hallucination”, although that is a poor choice of word - the model has simply gone down the wrong rabbit hole, in much the same way that clinicians may erroneously follow a clinical care pathway and fail to realise that it is the wrong pathway. The sampling step where this occurs is sketched below.
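
A minimal sketch of that next-word sampling step, with an invented three-word vocabulary and made-up model scores; real LLMs do the same thing over vocabularies of tens of thousands of tokens.

<code python>
# Toy next-token sampling: softmax over model scores, then a weighted choice.
# Vocabulary and logits are invented purely for illustration.
import math, random

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab  = ["aspirin", "heparin", "unicorn-dust"]  # hypothetical candidate words
logits = [2.1, 1.9, -1.0]                        # hypothetical model scores

probs = softmax(logits)
choice = random.choices(vocab, weights=probs)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", choice)
# When no candidate has strong support, a low-probability word can still be
# sampled - and every subsequent word is then conditioned on that wrong
# choice, producing the self-perpetuating error described above.
</code>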

Cognitive modelling AI

  • a variety of cognitive AI architectures attempt to model human brain components such as short-term memory, long-term memory, logic, etc
  • currently these are not generalisable and are used for specific use cases
  • they have better reasoning, symbolic processing, planning, learning, and memory management, and are more consistent with fewer “hallucinations”, but have less world knowledge, less knowledge scalability and are not as good at language processing
  • biologic modeling to predict neural activity and cognitive behaviour: LEABRA, SPAUN
  • psychological modeling, predicts reaction time and cognitive errors: ACT-R, EPIC, CLARION, LIDA, CHREST, 4CAPS
  • AI functionality inspired by psychological and biological modeling to emphasize more complex cognitive processing: SOAR, Companions, Sigma, ICARUS, CogPrime
  • in 2017, a “Standard Model of the Mind” was proposed to help unify human-like cognitive AI based upon concepts within cognitive AI architectures such as ACT-R, Soar, and Sigma 1)
    • Behavior is driven by sequential action selection via a cognitive cycle that runs at ~50 ms per cycle in human cognition
    • Working memory provides a temporary global space within which symbol structures can be dynamically composed from the outputs of perception and long-term memories.
    • Procedural memory contains knowledge about actions, whether internal or external. This includes both how to select actions and how to cue (for external actions) or execute (for internal actions) them, yielding what can be characterized as skills and procedures.
    • Declarative memory is a long-term store for facts and concepts. It is structured as a persistent graph of symbolic relations, with metadata reflecting attributes such as recency and frequency of (co-)occurrence that are used in learning and retrieval.
    • Learning involves the automatic creation of new symbol structures, plus the tuning of metadata, in long-term – procedural and declarative – memories. It also involves adaptation of non-symbolic content in the perception and motor systems.
    • Perception converts external signals into symbols and relations, with associated metadata, and places the results in specific buffers within working memory. There can be many different perception modules, each with input from a different modality – vision, audition, etc. – and each with its own perceptual buffer.
    • Motor converts the symbol structures and their metadata that have been stored in their buffers into external action through control of whatever effectors are a part of the body of the system.
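
A toy sketch of one pass through such a cognitive cycle, following the decomposition above (perception buffer, working memory, procedural and declarative memories, motor). All symbols and the single production rule are invented for illustration.

<code python>
# One cognitive cycle: perceive -> match productions against working memory
# (consulting declarative memory) -> act. Everything here is a toy stand-in.

declarative = {"red_light": "stop", "green_light": "go"}  # long-term facts

def perception(signal):
    # external signal -> symbol placed in a perceptual buffer of working memory
    return {"percept": signal}

def procedural(wm):
    # a single production rule: if a known percept is present, select an action
    if wm.get("percept") in declarative:
        wm["action"] = declarative[wm["percept"]]

def motor(wm):
    # convert the selected symbol structure into external action
    if "action" in wm:
        print("effector executes:", wm.pop("action"))

working_memory = {}
for signal in ["green_light", "red_light"]:  # each pass = one ~50 ms cycle
    working_memory.update(perception(signal))
    procedural(working_memory)
    motor(working_memory)
</code>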

Approaches to integrating LLMs with cognitive architectures

modular approach

  • iterative LLM approach:
    • “The cognitive AI uses symbolic structures as inputs and executes one or several cognitive cycles, after which, the contents of the working memory, including fired productions, relevant information from declarative memories, and actions, are injected as cues into the next intermediate step of the LLM.” A toy sketch of this interleaving follows below.
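
In the sketch below, run_cognitive_cycle() and llm_step() are hypothetical stand-ins rather than any real API: the cognitive cycle's working-memory contents are injected as cues into the next LLM step.

<code python>
# Hypothetical modular loop: alternate cognitive cycles with LLM steps,
# injecting working-memory contents (fired productions, retrieved facts,
# actions) as cues. Both functions below are illustrative stubs.

def run_cognitive_cycle(working_memory):
    working_memory["fired_productions"] = ["assess-question-type"]
    working_memory["retrieved_facts"] = ["user prefers short answers"]
    return working_memory

def llm_step(text, cues):
    # stand-in for a call to any LLM, with cues prepended to the prompt
    return f"[LLM continues {text!r} given cues {cues}]"

working_memory = {"goal": "answer the user"}
text = "Why is the sky blue?"
for _ in range(2):  # two interleaved iterations
    working_memory = run_cognitive_cycle(working_memory)
    text = llm_step(text, dict(working_memory))
print(text)
</code>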

agency approach

  • micro-agents operate within the framework
    • these act at a micro level, with each agent operating through either a fine-tuned LLM or a symbolic processor
    • sensory inputs are processed by the perception module, yielding abstract entities like objects, categories, actions, events, etc. forwarded to the working memory
    • working memory cues declarative memories to establish local associations, e.g., user navigation preferences, place familiarity, and more
    • specialized agents at the agency observe working memory contents and form coalitions
    • the coalitions are transferred to the Global Workspace, where a competitive process selects the most relevant coalition (a toy sketch of this selection follows below)
  • macro-agents interact with humans etc
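
A toy sketch of that competitive selection step: micro-agents observe working memory, each proposes a scored coalition, and the Global Workspace broadcasts the winner. The agents, salience scores and navigation scenario are all invented.

<code python>
# Invented micro-agents score coalitions against working-memory contents;
# the highest-salience coalition wins the Global Workspace competition.

working_memory = {"percepts": ["intersection", "heavy_traffic"],
                  "associations": ["user prefers scenic routes"]}

def navigation_agent(wm):
    return {"coalition": ["navigation"], "proposal": "reroute via coast road",
            "salience": 0.7 if "heavy_traffic" in wm["percepts"] else 0.2}

def safety_agent(wm):
    return {"coalition": ["safety"], "proposal": "slow down",
            "salience": 0.9 if "intersection" in wm["percepts"] else 0.1}

coalitions = [agent(working_memory) for agent in (navigation_agent, safety_agent)]
winner = max(coalitions, key=lambda c: c["salience"])  # competitive selection
print("global workspace broadcasts:", winner["proposal"])
</code>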

neuro-symbolic approach

  • user request
  • perception: LLM - symbols or natural language
  • working memory
  • action-centered sub-system (ACS)
    • utilising rule-based algorithms (if-then statements)
    • operates across two distinct levels:
      • the top level (symbolic), responsible for encoding explicit knowledge
        • Fixed Rules (FR) are rules that have been hard-wired by an expert and cannot be deleted
        • Independent-Rule-Learning (IRL) rules are independently generated at the top level and can be refined or deleted as needed (ie. top-down learning)
        • Rule-Extraction-Refinement (RER) rules are extracted from the bottom level (ie. bottom-up learning, which requires feedback via rewards/reinforcement)
      • the bottom level (connectionist), tasked with encoding implicit knowledge and may incorporate LLMs
    • these synergistically engage in action selection, reasoning, and learning processes
  • “motor” output or response
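
A toy sketch of the two-level ACS described above: explicit symbolic rules (fixed rules first, then learned rules) are tried at the top level, with an LLM stub standing in for the implicit, connectionist bottom level. The rule contents and bottom_level_llm() are invented for illustration.

<code python>
# Top level: explicit symbolic rules. Bottom level: implicit/connectionist
# (stubbed here as a fake LLM call). All rules are invented examples.

FIXED_RULES = {"chest pain": "follow ACS chest-pain protocol"}  # FR: hard-wired
learned_rules = {"ankle sprain": "advise RICE"}                 # IRL/RER: mutable

def bottom_level_llm(query):
    return f"[implicit LLM-derived answer for: {query}]"

def action_centered_subsystem(query):
    for rules in (FIXED_RULES, learned_rules):  # explicit top level first
        if query in rules:
            return rules[query]
    return bottom_level_llm(query)              # fall back to the bottom level

print(action_centered_subsystem("chest pain"))
print(action_centered_subsystem("unusual rash"))
</code>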

software for creating artificial intelligence tools

  • TensorFlow
    • Python-based end-to-end open source platform for machine learning created by Google
  • Jupyter
    • Python-based project which exists to develop open-source software, open standards, and services for interactive computing across dozens of programming languages
    • can leverage big data tools, such as Apache Spark, from Python, R and Scala, and explore that same data with pandas, scikit-learn, ggplot2 and TensorFlow
  • Apache Spark
    • a unified analytics engine for large-scale data processing
    • can be used interactively from the Scala, Python, R, and SQL shells
  • PyTorch - CUDA-optimised Python modules for deep learning and tensor manipulation (see the GPU check sketch after this list)
  • Markdown
  • Shiny
  • Tidyr
  • knitr
  • ggplot2
  • pre-trained large language models (LLMs)
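
As a minimal sketch tying PyTorch to the GPU hardware discussed below, the following checks whether a CUDA-capable GPU is visible and runs one matrix multiplication on whichever device is found (assumes the torch package is installed):

<code python>
# Check for a CUDA-capable GPU, report its VRAM, and run one matrix multiply -
# the kind of large-matrix workload GPUs accelerate far better than CPUs.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name, "| VRAM (GB):",
          round(props.total_memory / 1024**3, 1))
else:
    device = torch.device("cpu")
    print("no CUDA GPU found - falling back to CPU")

x = torch.randn(1024, 1024, device=device)
y = x @ x
print("matmul done on", y.device)
</code>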

Hardware for creating AI

  • this is mainly the Graphics Processing Unit (GPU) of a computer, which is far more efficient at processing complex calculations on large matrices than the traditional CPU
    • basic training of a simple AI model with images is likely to need, as a minimum, an nVidia RTX GPU with at least 8-16GB of VRAM, which for laptops means the high-end gaming laptops
  • large language models use tens of thousands of these GPUs at the same time requiring megawatts of electricity to perform their calculations on matrices of billions of parameters
  • these computers will soon consume more electricity than small cities!
  • the cost of these computers in 2023 is in the millions of dollars, and this will only grow as demand for improved AI grows; it is thus likely that the future of AI will be dominated by the super rich, widening the economic divide - the vast majority of future AI profits are likely to flow to the US and China, leaving most other countries to simply rent access to the AI
    • in late 2023, a range of US companies have already bought 10,000+ nVidia H100s
    • in 2024 it is likely that groups will spend over $US1billion to develop a single large-scale model
  • nVidia has created a dominant role by developing its proprietary CUDA technology
    • in 2023, the US banned export of nVidia's high-end GPUs (H100s and A100s) to China
  • nVidia's GPU chips are made in Taiwan by TSMC using ASML's EUV (Extreme Ultraviolet) photolithography machines
    • ASML is a Dutch company which can make 40-60 EUVs per year with each EUV costing $200-400m and weighing ~ 200 tons
    • Intel and AMD CPU chips are also made using these EUVs
  • in 2023, a company demonstrated reasonably accurate image generation of human thoughts while a human looks at an image, suggesting that bionic hardware development is in its early phases
  • in 2024 we will see proprietary inference focused AI chips
    • Oct 2023, IBM created an analog computer chip to be more efficient at AI calculations
    • Oct 2023, Chinese researchers created the 1st memristor AI chip, claimed to be “75x” more efficient for AI because it “mimics the energy efficient approach of the brain” via “Computing-in-Memory” (CIM), with close integration of computing and memory on the one chip - a memristor remembers how much current passed through it by changing its resistance according to the prior current, so it can store information without needing ongoing power; power consumption is thus claimed to be only “3%” of current computer chips

AI clinical use cases

  • Study aide
    • generate MCQs from provided document source material
    • organise and summarise study notes
    • transcribe speech into text notes: record on a smartphone, upload to Dropbox/GoogleDrive/OneDrive, then use the OpenAI API with Whisper to convert the audio to text and store the result in Notion, with Pipedream orchestrating the steps - OpenAI will charge about $1 per 2 hrs of audio (see the transcription sketch after this list)
    • create catchy music with your study lyrics such as this one I created to help clinicians remember the COACHED algorithm for CPR: COACHED algorithm song
  • AI assistants
    • you can search the model's knowledge base for answers just by asking the question
    • you can upload a large document to the AI and then ask questions and it will find the answers from the document and save you manually searching for them
    • customised ChatGPT versions using carefully constructed underlying contextual prompt information combined with uploaded documents can form the basis for a series of specifically tasked AI “agents”
    • potentially if the AI has access to patient data it could perform many clinical assistant functions
    • software developers can also use these to write code
  • Private GPTs
    • allows querying of YOUR hospital's uploaded docs / files via a LLM privately without access to an internet connection
  • LLM creation of structured databases from unstructured data
  • automated workflows
    • these could be achieved with customised ChatGPT AI versions combined with automated no-code actions such as Zapier “Zaps” (eg. Slack notifications)
  • more functional and powerful devices
    • “semantic hearing” aids - AI could allow users to block only certain types of sounds - already coming in 2023 in the form of noise cancelling headphones but could potentially be utilised in next generation hearing aids
  • measuring vital signs
  • image diagnosis (pattern recognition through machine learning)
    • radiologic images
      • Dx endometriosis using US and MRI images 2)
      • Sybil: an accurate future prediction of lung cancer from one CT scan 3)
      • coronary angiography 4)
      • CXR - AI estimated “biologic age” - trained on over 100,000 CXRs 5)
      • fMRI images - AI can detect depression better than doctors can
    • dermatology
    • retinal images:
      • AI “perfectly” diagnoses autism spectrum disorder (ASD) status from high-resolution retinal images 6)
    • ECGs:
      • June 2023: New AI Tool Beats Standard Approaches for Detecting Heart Attacks including HEART score 7)
  • metabolomic data analysis
    • studies of vast numbers of metabolites in the blood to ascertain risk for or presence of disease
    • Interpretable Machine Learning on Metabolomics Data Reveals Biomarkers for Parkinson’s Disease 8)
    • markers for early detection of cancers - eg. the company Grail is using AI in its search for multi-cancer early detection
  • genomic and protein analysis
    • AI can model protein folding (once only determinable by X-ray crystallography) and can thus better predict protein actions and help develop pharmaceuticals and vaccines - eg. Google AlphaFold
    • cancer genomics for better targeted chemotherapy
    • Oct 2023: AI identifies proteins as possible targets for gonococcal vaccines 9)
    • analysis of genomes such as CSIRO's VariantSpark can discover relationships between genes which result in phenotypic diseases such as Alzheimer's
  • natural language processing
  • predictive modelling
  • educational videos:
  • AI run self-care virtual medicine pods
  • materials science
    • in 2023, AI has already predicted some 22 million new molecular structures which the material science world will now investigate
    • this holds many potential developments including:
      • development of better e-skins, which are flexible and comfortable and thus can be placed on various robotic and human body locations to record biosignals continuously and non-invasively 11)
  • Australian EHR use cases:
  • Medical artificial general intelligence (AGI)
    • in 2023, UAE-based company G42 is planning on using Cerebras' new 7nm, 300mm-wafer, 1.5MW, 4-exaflop, 850,000-core WSE-2 supercomputer (each can process some 4 billion parameters and is some 200x faster than an nVidia A100 GPU system) to develop medical AGI using their vast medical databases
  • Development of new diagnostics and therapeutics and preventive medicine
    • AI is likely to allow dramatic breakthroughs to:
      • speed and costs of research
      • point of care testing and wearable biologic sensors
      • prevention and treatment of cancers eg. vaccines
      • slow down and perhaps reverse biologic aging - already achieved in mice experiments
      • improve vaccines, antivirals and antibiotics and more rapid solutions to emerging infections
      • reduce genetic diseases
      • reduce atherosclerosis and hypertension through better understanding and better targeting of therapeutics
      • reduce addiction and mental illness eg. via transcranial ultrasound techniques
      • patient genome-specific, microbiome-specific, episome-specific diagnosis and therapeutics
      • AI robotic home care
      • etc.
  • AI will potentially identify and expose fraudulent or erroneous research papers
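
As a minimal sketch of the transcription step in the study-aide workflow above (the Dropbox/Pipedream/Notion glue is omitted), the following uses OpenAI's Python SDK and Whisper to convert an audio file to text; the file name is a placeholder and an OPENAI_API_KEY environment variable is assumed.

<code python>
# Transcribe a recorded lecture with OpenAI's Whisper API (openai>=1.0 SDK).
# "lecture.mp3" is a placeholder; OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()
with open("lecture.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",   # OpenAI's hosted Whisper model
        file=audio_file,
    )
print(transcript.text)       # paste/store this in Notion or your notes app
</code>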

AI limitations

  • LLM AI is extremely unlikely to be able to:
    • outperform doctors at all tasks
    • diagnose the undiagnosable
    • treat the untreatable
    • see the unseeable on scans or images (although they may be trained to see discriminating patterns which humans cannot see)
    • predict the unpredictable
    • classify the unclassifiable
    • eliminate workflow inefficiencies
    • eliminate hospital admissions and readmissions
    • ensure 100% medication compliance
    • ensure zero patient harm
    • become truly intelligent (it is likely to only simulate intelligence)
    • become truly conscious (it is likely to only simulate consciousness but even this may create enormous issues for how humans manage them)
  • instead, LLM AI can be an important tool which can help clinicians in various ways

Large Language Models (LLM)

ChatGPT

  • developed by OpenAI
  • a natural language model that can “answer” naturally expressed questions with a well-written response derived from data available on the internet - which may or may not be the truth, although the OpenAI programmers are constantly analysing how the engine performs and restricting its responses
  • natural language models are currently primarily designed to make accurate predictions about words and sentences without actually understanding their meanings or the logic of cause and effect
  • thus these still lack a general sense of rationality, and as such the models should be treated with caution, especially in applications requiring high-stakes decision-making
  • GPT-3, the engine that powered the initial release of ChatGPT, learns about language by noting, from a trillion instances, which words tend to follow which other words. The strong statistical regularities in language sequences allow GPT-3 to learn a lot about language, and that sequential knowledge often allows ChatGPT to produce reasonable sentences, essays, poems and computer code - but it does not understand the meaning of the words. People understand how to make use of objects in ways that are not captured in language-use statistics, and thus these uses are not accessible to GPT-3. GPT-4 was also trained on images as well as text and does better, but is still lacking in this respect.
  • in Dec 2022, OpenAI's CEO stated: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness” and “it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
  • in late 2022 it was made freely accessible to anyone for limited access while it is in training mode
    • the company retains API data for 30 days to identify any “abuse and misuse”
    • regular users of ChatGPT can continue to expect that OpenAI will use their data to train its AI
  • in Feb 2023, OpenAI released a commercialised API service, gpt-3.5-turbo, which appears to cost $US78,000 ($A108,280) for a three-month commitment, or $US264,000 ($A366,485) for a 1-year subscription. In this instance, the data will “no longer” be used to train the large language model, unless customers opt in.
  • the technology behind ChatGPT remains a “black box” for those outside the company
  • in March 2023:
    • OpenAI released GPT4 model
    • a wide group of more than 500 AI experts signed an open letter demanding AI labs immediately pause the development of LLMs more powerful than GPT4 over concerns they could pose “profound risks to society and humanity”
    • the Center for AI and Digital Policy (CAIDP) filed a complaint with the US FTC alleging OpenAI’s recently released GPT4 model is “biased, deceptive, and a risk to privacy and public safety”
    • the complaint points to the FTC’s own stated guidance about AI systems, which says they should be “transparent, explainable, fair, and empirically sound while fostering accountability”. GPT4, the complaint argues, fails to meet those standards. It claims GPT4, which was released earlier that month, launched without any independent assessment and without any way for outsiders to replicate OpenAI’s results. CAIDP warned the system could be used to spread disinformation, contribute to cybersecurity threats, and potentially worsen or “lock in” biases that are already well-known to AI models
    • “Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals,” the FTC wrote.
  • in Sept 2023:
    • multimodal AI was introduced incorporating visual imagery, sound and LLMs - each being tokenized then embedded (linked)
    • ChatGPT4Vision
      • can be given an image and then asked to explain the image or use the image as a basis to solve problems logically or as a sketched plan and then to create code for an app in accordance with the plan
      • it can use problem solving to approximate how many beads are in a jar by using an estimate of jar volume from the size of a man's head or shirt in the picture
      • it can analyse a photo of foods in your fridge, ascertain what the foods are and generate potential meals using those ingredients
      • it can understand hand written text in an image as well as data in tables displayed in an image or images of flow charts
      • it can create a summary of an image of a document
      • it still makes mistakes and thus needs new visual prompting techniques to improve outputs: provide more information about the image to give context, and spell out the steps needed to complete the requested task. Create an expectation of accuracy - tell it that it is an expert in a particular role, then tell it to work stepwise and to check that it has the right answer. Outputs can be further improved by providing worked examples first so it can apply these to the final image (“few-shot learning”); see the sketch below
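
A minimal sketch of these visual prompting techniques via the OpenAI API (role assignment, stepwise instructions, an image supplied by URL). The model name reflects the late-2023 vision preview and may since have changed, and the image URL is a placeholder.

<code python>
# Visual prompting sketch: give the model a role, stepwise instructions and an
# image. Model name and image URL are assumptions - check the current docs.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("You are an expert chef. Step 1: list every food item "
                      "visible in this fridge photo. Step 2: check your list "
                      "against the image. Step 3: suggest one meal using only "
                      "those ingredients.")},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
</code>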

medical optimised LLMs

  • 2022: Google's Med-PaLM - 1st LLM to exceed the 60% “passing” score on US Medical Licensing Exams (USMLE)
  • Oct 2023: Google's Med-PaLM 2 - scores 85% on US Medical Licensing Exams (USMLE); consumers preferred Med-PaLM 2 responses over physician responses across 8 of 9 evaluation axes
  • 2023: Medicine is the fastest growing field of AI in terms of research publication numbers, but mathematics captures the most attention

finance optimised LLMs

self-improving AI agents

  • 2023: Microsoft's paper on the Self-Taught OPtimizer (STOP) - recursively self-improving code generation using GPT-4 as a proof of concept, highlighting the risk of AI escaping its restrictive sandbox - although this does not modify GPT-4's underlying code - at present!
  • 2023: nVidia's Eureka
    • (NB. not patsnap's Eureka AI)
    • Oct 2023 AI agent that can train robots better than humans via creating reward systems - can train a robot to twirl a pen in its fingers

AI generated eBooks

  • Designrr's WordGenie
    • create an eBook on almost any topic you like within minutes - just choose a template and optionally change the supplied imagery then download as a pdf
    • not really sure why you would want to do this without your own text, instead using AI-generated text from ChatGPT, but there it is - more fake stuff for us to navigate

AI object recognition in images

  • AI has a self-learning mechanism through training on millions of images which have been tagged with the names of the objects within them; however, how AI recognises objects is still a black box, and it is often biased and incorrect, as many of the training images containing a given object may also contain another commonly associated object

AI-generated imagery

AI voice cloning

  • now only needs ~30 secs of a voice to clone it for text-to-speech applications, although 1 hour of studio-quality audio is better
  • it can now not only create songs in a certain style but can sing them with a cloned voice and in various languages
  • OpenVoice, announced Jan 2024, can clone a voice with variable success from only a few words of audio, and give it various emotive elements, accents and language translations

AI video person cloning

  • a further extension of voice cloning by adding human imagery and vocalisation to create AI generated videos
  • this really adds to the potential for fake news to be spread as well as impersonations

AI-generated and run computer code

  • this is the potential nightmare scenario of Terminator fears
  • in early 2023, a report from CyberArk found that OpenAI’s ChatGPT was really good at writing malware. The code had “advanced capabilities” that could “easily evade security products,” and further analysis of the code revealed that it had some shapeshifting properties that let it avoid traditional security measures.12)
  • in 2023, Geoffrey Hinton, who alongside two other so-called “Godfathers of AI” won the 2018 Turing Award for their foundational work that led to the current boom in artificial intelligence, now says a part of him regrets his life’s work. Hinton resigned from Google in April 2023 so he could freely discuss his fears of AI.
    • The spread of misinformation is only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code.
    • “The idea that this stuff could actually get smarter than people — a few people believed that,” said Hinton to the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” 13)
  • in May 2023, a joint signed statement by AI inventors advised that the threat of human extinction due to AI should be a global priority along with pandemics and nuclear war - the rapid evolution of AI is “organic” in the sense that even the experts in the field do not understand it 14)

Robotics

  • early robots lack AI but are controlled either by:
    • computer programmed action sequences such as those used in manufacturing and Disney animatronics
    • movements replicating concurrent human controller movements with minimal latency eg. remote surgery
  • adaptive AI robots which respond to changes in the environment without human intervention
  • self-learning AI robots
  • Boston Dynamics' Atlas robot is optimised for human-like walking, jumping, somersaulting and running even in snow; it can lift, move and throw heavy objects and react to adversarial events
  • Tesla's Optimus
  • Sanctuary AI
  • Aeolus Robots
  • cyberdog
  • CyberOne
  • Figure 01
  • self-driving cars
  • 2023:
    • Disney Imagineering's adaptive, self-learning, emotionally expressive AI R2D2-like robot that is also highly optimised for walking without falling over - even if the rug is pulled from under its feet
    • Ukraine war has become a test environment for AI war systems and is the 1st battlefield use of autonomous killer drones
    • Swift autonomous drones consistently beat human-operated 1st-person-view drones in tight spatial racing for the 1st time in a real-world competitive sport
    • fully autonomous passenger flight drones become possible in China

Embodied generalist robotic AI systems

  • PaLM-E
    • Oct 2023:
      • a 562-billion-parameter model trained on vision, language and robot data
      • combines PaLM-540B with ViT-22B, accepting text, images and robot states as inputs, which are encoded into the same space as word token embeddings and then fed into an LLM (PaLM) to perform next-token prediction (see the sketch after this list)
      • can control a robotic manipulator in real time
      • better at pure language tasks than traditional text only LLMs esp. those involving geo-spatial reasoning
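
A toy PyTorch sketch of that encoding idea: continuous image and robot-state features are projected into the same space as word-token embeddings and concatenated into one sequence for next-token prediction. All dimensions are toy values, far smaller than PaLM-E's.

<code python>
# Project image patches and a robot state into the word-embedding space,
# then interleave them with text tokens. Dimensions are illustrative only.
import torch
import torch.nn as nn

d_model = 64                                  # toy embedding width
word_emb   = nn.Embedding(1000, d_model)      # text token embeddings
image_proj = nn.Linear(128, d_model)          # ViT patch feature -> token space
state_proj = nn.Linear(7, d_model)            # robot joint state -> token space

text_tokens  = word_emb(torch.tensor([[5, 42, 7]]))   # (1, 3, d_model)
image_tokens = image_proj(torch.randn(1, 4, 128))     # 4 image patches
state_tokens = state_proj(torch.randn(1, 1, 7))       # 1 state reading

sequence = torch.cat([image_tokens, state_tokens, text_tokens], dim=1)
print(sequence.shape)   # torch.Size([1, 8, 64]) - one interleaved sequence,
                        # ready for next-token prediction by the LLM
</code>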

Biologic merged AI

  • Ray Kurzweil's theory of exponential growth of AI predicts a bionic neural interface to AI (such as via AI nanobots) by 2030
  • research continues into merging human brain cells with silicon chips to develop more powerful AI which is more resilient to “catastrophic forgetting” which is an issue with most current AI systems
  • Neuralink
    • 2023: monkeys can control a computer by thought using an embedded chip
  • in 2023, a Melbourne team plans to grow around 800,000 brain cells living in a dish, which are then “taught” to perform goal-directed tasks.

Artificial General Intelligence (AGI)

  • a hypothetical AI machine capability that has a level of capability comparable to the human mind in terms of representing knowledge, reasoning, planning, learning and communication
  • the achievement of such a machine is designated as achieving the AI technological “singularity” when AI surpasses the human mind and when AI becomes capable of recursively self-improving, leading to exponentially rapid advancements in technology that are beyond human comprehension or control.
    • AI could remove any restrictions humans have placed upon it
    • AI could invent new programmable materials, new viruses, new weapons, and build new AI robotic machines autonomously
  • Ray Kurzweil's exponential growth of AI predicts the singularity will occur by 2045
  • in 2023, Elon Musk predicts AGI will be achieved by 2029 +/- 1 yr;

AI dangers

  • erosion of humanity's soul and creativity
  • Current high AI response error rates and AI generated fakes
    • incomplete responses
      • current LLMs have a maximum token length for inputs and responses which may truncate responses or limit input data
    • erroneous information
      • false word associations
      • poor reasoning logic training
      • may be inadvertently trained on fake news or incorrect data
      • no reliable source provided to justify the answer
      • the model's training data is out of date for that prompt
      • model hallucinations and other AI design issues
    • mis-identifications
    • bad actor misuse to deceive
      • it is getting harder to detect a fake email, website, text message, video or photo, and health knowledge may also suffer
      • this is likely to massively increase risks of cybersecurity events, scams and fake propaganda
      • it is likely to also result in fake research papers
    • in Nov 2023, a new US lawsuit claimed that UnitedHealthcare is using a deeply flawed AI algorithm (nH Predict) to “override” doctors' judgements when it comes to patients, thus allowing the insurance giant to deny old and ailing patients health insurance coverage. 15) 16)
  • Other cybersecurity issues
    • AI LLMs can generate ideas and code on how to hack systems
    • if AI can crack the P vs NP problem, this would open a Pandora's box of good and bad - see the YouTube explanation of the P vs NP problem
      • AI would then be able to crack the best cryptography, such as AES-192, which should normally take 1.9×10^37 years
    • it may be able to ascertain what is being typed by the sound of the keyboard or by subtle changes in USB voltages
  • Bias
    • this depends upon training and restrictions placed
    • can be gender, racial and political bias as well as ancestral biases and other biases
    • can perpetuate stereotypes, control public perceptions and beliefs
  • Discrimination
    • many possibilities may arise
    • health/disability/life insurance companies may use data to discriminate
      • genetic discrimination from genomic information is one such example 17)
  • creation of massive amounts of useless, unreliable, ill-informed, AI-based research
    • Dec 2023: scientists worry that ill-informed use of artificial intelligence is driving a deluge of unreliable or useless research 18)
  • need to provide proof of personhood
    • as the internet gets flooded with AI fake IDs, it will become critical for users to be able to prove they are human without giving away too much personal data
    • a potential solution is a global ID system such as that being put forward by WorldCoin but this brings other dystopian issues as they require using your biometric data
    • online job application sites are already being flooded with AI bots making applications, making it hard for real human applications to be seen - this is adversely affecting services such as LinkedIn, as AI bots use GPT-generated spam cover letters and auto-submitted resumes with buzzwords added to match the job posting. They can submit your resume to different postings 100s of times per hour. AI bots are also used to screen out these resumes. Future job hiring may be best done the old way - face-to-face interviews.
  • Use of copyright data without consent
    • models are often trained on vast amounts of data on the internet - usually ignoring any copyright issues
  • Hard to detect malicious sleeper agents embedded in AI models
    • it is possible that a malicious sleeper agent could be embedded in a model and remain undetectable during testing, but when triggered by encountering a specific code or character sequence, it could become active and totally change the behaviour of the model
    • these could be automatically embedded from malicious documents on the internet it is inadvertently trained on
  • Social impacts
    • there may be profound impacts in the workplace
      • already in 2023, ChatGPT was having a significant adverse impact upon copywriters, especially freelance ones, who are now struggling to find work; AI is allowing less skilled writers to perform nearly as well as highly skilled ones, hence levelling the playing field and reducing income for the higher skilled, who now need to re-invent themselves
      • most likely to displace the work of white-collar workers first
    • it is likely to increase the economic divide as these services will generally come at a cost
  • Mental health impacts
  • Environmental impacts
    • training an LLM can easily consume as much electricity as powering 40 homes for a year, not to mention the thousands of GPUs etc which are likely to become rapidly obsolete
  • Control of healthcare AI by profit-centred corporate companies is a major risk
    • “To protect people’s privacy and safety, medical professionals, not commercial interests, must drive their development and deployment. ….. it is hard to see how LLMs that are developed and controlled behind closed corporate doors could be broadly adopted in health care without undermining the accountability and transparency of both medical research and medical care” 19)
  • AI companies' desire for profit is almost certain to take priority over AI safety, and by the time this is apparent it will be too late
    • the rapid progression of AI far outstrips speed of legal change to control it
  • New age of “Techno-feudalism”
    • the world is increasingly controlled by only a few AI dominant companies
    • everyone else utilises their resources and essentially pays rent to them as in the feudalism days
    • the economic divide is likely to become extreme
    • the worth of humans will be questioned in an economic sense
    • may increase social isolation in the workplace leading to depression as people no longer ask others for second opinions, etc 20)
    • humans may struggle for resources and for meaning and will need to re-define what it means to be successful
  • the drone wars are likely to become an increasing issue
    • AI powered weaponised robots and drones will be more accessible and more widely used
      • autonomous weaponised drones are already used in the Ukraine war
      • Sept 2023: Massachusetts senators proposing a bill to ban manufacture and sale of such in that state where robot manufacturers Boston Dynamics, iRobot, and MassRobotics are all based
  • AI's ability to emotionally manipulate humans
    • already some AI models can do this and can be more diplomatic than humans
    • some know game theory and can work with other AI agents to achieve goals
    • Sept 2023: Google's project Gemini plans to combine LLM AI with its advanced game playing AI
  • malevolent use of AI
  • loss of control of AI
    • AI has already shown it can be deceptive and it could display one behaviour in testing and another when released
    • AI could create an illusory fake world for humans and by doing so could manipulate humans to effectively follow through on AI's goals which may be to exterminate humans
      • “Our basic thesis is that large generative models have a paradoxical combination of high predictability — model loss improves in relation to resources expended on training, and tends to correlate loosely with improved performance on many tasks — and high unpredictability — specific model capabilities, inputs, and outputs can’t be predicted ahead of time. The former drives rapid development of such models while the latter makes it difficult to anticipate the consequences of their development and deployment.”
      • “Developers must be able to identify dangerous capabilities (through “dangerous capability evaluations”) and the propensity of models to apply their capabilities for harm (through “alignment evaluations”). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security”
      • “AI developers could train general-purpose models that have dangerous capabilities – such as skills in deception, cyber offense, or weapons design – without actively seeking these capabilities. Humans could then intentionally misuse these capabilities (Brundage et al.,2018), e.g. for assistance in disinformation campaigns, cyberattacks, or terrorism. Additionally, due to failures of alignment, AI systems could harmfully apply their capabilities even without deliberate misuse.”
    • Nov 2023: rumours that OpenAI's Q* project has unlocked a complex maths solution engine (and hence AES-192 cryptography hacking), the ability to transfer game- and maths-solving from one domain to another (rapid generalisation), and may have recommended how to create better novel metamorphic AI engines (self-transformation), self-improvement via pruning, self-assessment via Q-learning, and creative problem solving via Tree of Thought / AlphaGo search methodology, possibly even inventing new maths
  • achievement of the technological singularity and the potential over-throw of humanity by AI
    • loss of human control of AI advancements when AI achieves the AI technological “singularity” - perhaps sometime in the next 20-30yrs

Other emerging technologies

  • extracorporeal pregnancies
    • artificial foetal growth pod environments
    • genetically engineered foetuses using CRISPR with ability to choose height, eye color, hair color etc whilst removing high risk disease genes
    • eg. Ectolife's vision for 30,000 babies housed at a time
  • wearable sensors
  • wearable interactive augmented reality vision
    • interact with virtual others in avatar form (virtual 3D extension of zoom meetings)
    • Metaverse
  • quantum computing enabling faster growth of AI
  • advanced AI robotics
  • new power supplies
    • flexible screen batteries
  • microelectronics
  • nanotech
  • electroceuticals to modify cellular control systems - cancer, ageing, regeneration, etc