
AI - Hugging Face

Introduction

  • Hugging Face is a website that allows AI programmers to collaborate on machine learning, access models and training datasets, and share their own models, as well as providing an environment to run or test models
  • Free registration level currently allows unlimited hosting of shared models
    • you will then need to go to Settings and create a personal access token
    • you can then create a .env file to store your environment settings
      • HUGGINGFACEHUB_API_TOKEN =
      • OPENAI_API_KEY = (if you wish to use ChatGPT - you need to supply your OpenAI registration key as well)
  • a quick way to get the code needed to use a model you have selected on Hugging Face: click the Deploy button, then Inference API, and copy and paste the code
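Calling a model through the hosted Inference API is just an HTTP POST with your token in an Authorization header. A minimal sketch, assuming the requests module is installed and using gpt2 purely as an example model id:

```python
import os
import requests

# Example only - substitute the model id you selected on Hugging Face
API_URL = "https://api-inference.huggingface.co/models/gpt2"

def build_headers(token):
    # The Inference API authenticates with a Bearer token
    return {"Authorization": f"Bearer {token}"}

def query(payload):
    token = os.getenv("HUGGINGFACEHUB_API_TOKEN")
    response = requests.post(API_URL, headers=build_headers(token), json=payload)
    return response.json()

# e.g. query({"inputs": "some input text"})
```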

LangChain

  • LangChain is an integration framework designed to simplify the creation of applications using large language models (LLMs)
    • initially developed as open source in late 2022
  • install via: conda install -c conda-forge langchain
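At its core, LangChain's PromptTemplate fills named placeholders in a prompt string (the real class adds input validation and chain integration). A rough standard-library sketch of the idea:

```python
def fill_template(template, **variables):
    # Substitute {name} placeholders in the prompt text -
    # roughly what PromptTemplate.format does
    return template.format(**variables)

template = """
Write a short story.
CONTEXT: {scenario}
STORY:
"""

prompt_text = fill_template(template, scenario="a dog chasing a ball")
```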

Streamlit

  • Streamlit is an open source Python module for building end-user GUI interfaces, letting users enter values and then see the results of your code - apps can then be deployed to any device via the Streamlit Community Cloud
  • install via: conda install -c conda-forge streamlit

Using a model hosted on Hugging Face

#code by Al Jason https://www.youtube.com/watch?v=_j7JEDWuqLE
from dotenv import find_dotenv, load_dotenv
from transformers import pipeline #this will allow downloading your selected model from Hugging Face
import requests #used to call models hosted via the Hugging Face Inference API
import os
import streamlit as st

#if using an external tool such as ChatGPT then add:
from langchain import PromptTemplate, LLMChain, OpenAI

load_dotenv(find_dotenv()) #finds and loads the environment variables from your .env file
HUGGINGFACEHUB_API_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")

#create a function to use pipeline to download and then run the model:
def img2text(url):
    image_to_text = pipeline("image-to-text", model="model repo id from the model's Hugging Face page") #"image-to-text" comes from Hugging Face's list of defined tasks (https://huggingface.co/tasks)
    text = image_to_text(url)[0]["generated_text"]
    
    print(text)
    return text


#create a function to send your text above to ChatGPT to create a story of the image:
def generate_story(scenario):
    template = '''
    your plain-text instructions to ChatGPT here (everything inside the triple quotes is sent as prompt text)
    CONTEXT: {scenario}
    STORY:
    '''
    
    prompt=PromptTemplate(template=template, input_variables=["scenario"])
    
    story_llm = LLMChain(llm=OpenAI(model_name="gpt-3.5-turbo", temperature=1),
                         prompt=prompt, verbose=True)
    
    story = story_llm.predict(scenario=scenario)
    
    print(story)
    return story
    
#now define a function to turn text to speech but this time instead of downloading a model, we will run it hosted on the website:
def text2speech(message):
    API_URL = "full http path to the Hugging Face model URL"
    headers = {"Authorization": f"Bearer {HUGGINGFACEHUB_API_TOKEN}"}
    payloads={
             "inputs":message
             }
    response = requests.post(API_URL,headers=headers, json=payloads)
    with open('audio.flac', 'wb') as file:
       file.write(response.content) #write the audio response into a local file called audio.flac
       
#now make a function to do all the above
def main():
    st.set_page_config(page_title="your project title", page_icon="@")
    st.header("Your description of what it does")
    uploaded_file = st.file_uploader("Choose an image...", type="jpg") #get end user to submit an image
    
    if uploaded_file is not None:
      print(uploaded_file)
      bytes_data=uploaded_file.getvalue()
      with open(uploaded_file.name, "wb") as file:
        file.write(bytes_data)
      st.image(uploaded_file, caption="Uploaded image", 
               use_column_width = True)
      scenario = img2text(uploaded_file.name)
      story = generate_story(scenario)
      text2speech(story)
      
      with st.expander("scenario"):
        st.write(scenario)
      
      with st.expander("story"):
        st.write(story)
        
      st.audio("audio.flac")
      
if __name__ == '__main__':
   main()
   
#run this with: streamlit run projectfilename.py
it/ai_huggingface.txt · Last modified: 2023/08/21 11:21 by gary1
