ChatGPT

https://wrds-jupyter.wharton.upenn.edu/

100_generative_ai_06192024.docx

Migration guide

https://platform.openai.com/docs/assistants/migration
https://github.com/openai/openai-node/discussions/217
https://github.com/openai/openai-python/discussions/742

Sources

Building a Project with the ChatGPT API
Building Apps with AI Tools: ChatGPT, Semantic Kernel, and Langchain
OpenAI API for Python Developers
OpenAI API: Code Interpreter and Advanced Data Analysis
ChatGPT for Web Developers
Building a Video Transcriber with Node.js and 
ChatGPT Tips for the Help Desk
Creating GPTs with Actions
ChatGPT: Publishing GPTs on the GPT Store
Hands-On Generative AI with Multi-Agent LangChain: Building Real-World Applications
Prompt Engineering with LangChain
Introduction to AI Orchestration with LangChain and LlamaIndex
Chat with Your Data Using ChatGPT
https://github.com/ychoi-kr/langchain-book
https://github.com/ychoi-kr/ChatGPT-API-Python
https://github.com/gilbutITbook/080413
https://github.com/wikibook/langchain
https://github.com/LinkedInLearning/building-a-project-with-the-chatgpt-api-4484140/tree/main/02_03
https://github.com/openai/openai-python
https://github.com/LinkedInLearning/creating-gpts-with-actions-4515653
https://github.com/LinkedInLearning/chat-with-your-data-using-chatgpt-3806110

Research papers

https://arxiv.org/abs/2206.14576
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4416687
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335905
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4521096
https://arxiv.org/abs/2304.03439
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4754678
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4558295
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4399406
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4429658
https://arxiv.org/abs/2303.17564
https://www.openconf.org/FMASeoul2024/modules/request.php?module=oc_program&action=view.php&id=86&file=1/86.pdf&
https://www.openconf.org/FMASeoul2024/modules/request.php?module=oc_program&action=view.php&id=315&file=1/315.pdf&
https://www.openconf.org/FMASeoul2024/modules/request.php?module=oc_program&action=view.php&id=121&file=1/121.pdf&
https://www.openconf.org/FMASeoul2024/modules/request.php?module=oc_program&action=view.php&id=243&file=1/243.pdf&

from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "write a haiku about ai"}
    ]
)

print(completion.choices[0].message.content)

Silent circuits hum, 
Thoughts born of code and data— 
Mind without a face.


from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You will be provided with statements, and your task is to convert them to standard English."
        },
        {
            "role": "user",
            "content": "She no went to the market."
        }
    ],
    temperature=0.7,
    max_tokens=64,
    top_p=1
)

print(response.choices[0].message.content)
She did not go to the market.

Models overview

The OpenAI API is powered by a diverse set of models with different capabilities and price points. You can also make customizations to our models for your specific use case with fine-tuning.

MODEL DESCRIPTION
GPT-4o Our high-intelligence flagship model for complex, multi-step tasks
GPT-4o mini Our affordable and intelligent small model for fast, lightweight tasks
GPT-4 Turbo and GPT-4 The previous set of high-intelligence models
GPT-3.5 Turbo A fast, inexpensive model for simple tasks
DALL·E A model that can generate and edit images given a natural language prompt
TTS A set of models that can convert text into natural sounding spoken audio
Whisper A model that can convert audio into text
Embeddings A set of models that can convert text into a numerical form
Moderation A fine-tuned model that can detect whether text may be sensitive or unsafe
GPT base A set of models without instruction following that can understand as well as generate natural language or code
Deprecated A full list of models that have been deprecated along with the suggested replacement

Why use a ChatGPT plugin such as WebPilot or Expedia?

Because ChatGPT alone can only draw on the knowledge it learned during training, while a plugin gives it access to up-to-date, external information.

https://openai.com/index/chatgpt-plugins/

Using a ChatGPT plugin offers numerous advantages, including enhanced functionality, improved integration with existing tools, and personalized interactions. It automates repetitive tasks and manages data efficiently, leading to cost savings and increased productivity. For instance, a ChatGPT plugin integrated into a customer support system can handle common queries, book appointments, and provide personalized product recommendations, ensuring consistent, efficient, and scalable service while freeing up human agents for more complex issues. This not only improves user experience but also offers valuable analytics for continuous improvement.

Why RAG or Fine-Tuning

https://cloud.google.com/use-cases/retrieval-augmented-generation

https://www.datacamp.com/blog/what-is-retrieval-augmented-generation-rag

Domain-Specific Knowledge: Plugins can be tailored to provide expertise in specific areas, such as medical advice, legal consultation, technical support, or financial planning.

Specialized Tasks: They can perform specialized tasks like booking appointments, making travel arrangements, or managing customer support tickets.

Tailored Interactions: Plugins allow for customized interactions based on user preferences, providing a more personalized experience.

Adaptable Responses: They can adapt their responses based on specific business rules or user data, making interactions more relevant and accurate.
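To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-prompt flow. The toy keyword-overlap scorer stands in for real embedding similarity, and the documents are invented examples:

```python
# Minimal RAG sketch: retrieve the most relevant document, then build an
# augmented prompt. Real systems rank by embedding similarity; a toy
# keyword-overlap score stands in here so the flow is easy to follow.

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm EST, Monday through Friday.",
]

def score(query, doc):
    # Count shared words between query and document (toy relevance score)
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs):
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query, docs):
    context = retrieve(query, docs)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

prompt = build_prompt("What is the refund policy?", documents)
print(prompt)
```

The resulting `prompt` is then sent as the user message of an ordinary chat completion call, so the model answers from retrieved facts rather than from its training data alone.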

LangChain Doc

https://python.langchain.com/v0.2/docs/introduction/

With LangChain, you can effortlessly combine different components and create advanced use cases like chatbots, generative question-answering, summarization, and much more. LangChain provides tools like prompt templates, LLMs from Hugging Face Hub or OpenAI, agents, and memory capabilities, enabling developers to build applications that can tackle complex language-based tasks.

Chain-of-thought prompting

From "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" by Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou: https://arxiv.org/abs/2201.11903
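A chain-of-thought prompt simply includes a worked, step-by-step exemplar before the new question; the snippet below builds one modeled on the paper's arithmetic example, ending with "A:" to invite the model to continue the reasoning pattern:

```python
# Chain-of-thought prompting: show the model a worked, step-by-step example
# so it spells out its reasoning before answering the new question.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
print(cot_prompt)
```

Sent as the user message of a chat completion, this prompt typically elicits the intermediate steps ("23 - 20 = 3, 3 + 6 = 9") rather than a bare final answer.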

Semantic search
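Semantic search retrieves documents by meaning rather than by exact keywords: the query and each document are embedded as vectors, and results are ranked by vector similarity. A minimal sketch with toy 3-d vectors (real vectors would come from the embeddings endpoint and have 1,536 dimensions):

```python
import math

# Semantic search sketch: rank documents by cosine similarity between the
# query vector and each document vector. The 3-d vectors are toy values;
# in practice they come from client.embeddings.create(...).
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

doc_vectors = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
query_vector = [0.85, 0.15, 0.05]

best = max(doc_vectors, key=lambda d: cosine(query_vector, doc_vectors[d]))
print(best)  # the document closest in meaning to the query
```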

Zero-Shot and Few-Shot

Zero-Shot Example

Python Code:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_news_article_zero_shot(topic):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a professional news writer. Write a detailed and engaging news article."},
            {"role": "user", "content": f"Write a news article about {topic}."}
        ]
    )
    return response.choices[0].message.content.strip()

news_article_zero_shot = generate_news_article_zero_shot("the recent advancements in renewable energy technologies")
print(news_article_zero_shot)

Recent Advancements in Renewable Energy Technologies

In recent years, the renewable energy sector has made significant strides in technology, leading to more efficient and cost-effective solutions for sustainable energy. One of the most notable advancements is in the field of solar power, where researchers have developed high-efficiency solar panels that are capable of converting more sunlight into electricity than ever before.

Wind energy technology has also seen remarkable progress with the development of larger and more efficient wind turbines. These new turbines are designed to generate more power with less environmental impact, making wind energy a more viable option for large-scale energy production.

Energy storage solutions have also advanced, allowing for better storage and distribution of renewable energy. This includes the development of high-capacity batteries that can store excess energy generated by renewable sources, ensuring a steady supply of electricity even when the sun isn’t shining or the wind isn’t blowing.

These advancements are paving the way for a more sustainable future, reducing our dependence on fossil fuels and helping to combat climate change. As technology continues to improve, the potential for renewable energy to become a major player in the global energy market grows ever closer to reality.

Few-Shot Example

Python Code:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_news_article_few_shot(topic):
    few_shot_prompt = f"""
    Write a news article about {topic}.
    Example 1:
    Title: Solar Power Breakthrough: New Panels Increase Efficiency by 50%
    In a major breakthrough for the renewable energy sector, scientists have developed new solar panels that increase efficiency by 50%. These panels, made from advanced perovskite materials, are not only more efficient but also more affordable, paving the way for widespread adoption.

    Example 2:
    Title: Wind Energy Takes a Leap Forward with Giant Turbines
    The renewable energy industry is celebrating another milestone with the introduction of giant wind turbines that can generate double the power of previous models. These new turbines, designed for offshore wind farms, promise to significantly boost the capacity of wind energy projects worldwide.
    Now, write a detailed and engaging news article about {topic}.
    """
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a professional news writer."},
            {"role": "user", "content": few_shot_prompt}
        ]
    )
    return response.choices[0].message.content.strip()

news_article_few_shot = generate_news_article_few_shot("recent advancements in renewable energy technologies")
print(news_article_few_shot)
Output:

Title: Breakthroughs in Renewable Energy Technologies: A Step Towards a Greener Future

The renewable energy sector continues to witness significant advancements, pushing the boundaries of what is possible and paving the way for a greener future. Among the most notable developments is the introduction of highly efficient solar panels made from advanced perovskite materials. These new panels not only increase efficiency by up to 50% but also reduce production costs, making solar energy more accessible to a broader population.

Wind energy has also seen remarkable progress with the development of giant turbines designed for offshore wind farms. These turbines can generate twice the power of previous models, significantly boosting the capacity and efficiency of wind energy projects worldwide. The enhanced design and technology behind these turbines promise to make wind energy a more formidable player in the renewable energy market.

Energy storage technologies are also evolving rapidly. High-capacity batteries capable of storing excess energy from renewable sources are now more efficient and have longer lifespans. This advancement ensures a steady supply of electricity even when natural energy sources are not at their peak, addressing one of the most significant challenges in the renewable energy sector.

Additionally, smart grid technology is being integrated to optimize the distribution and consumption of renewable energy. These grids use artificial intelligence to manage the flow of electricity, reduce wastage, and seamlessly incorporate renewable energy sources into the existing infrastructure.

Governments and private enterprises are increasingly investing in renewable energy technologies, driven by the need to combat climate change and reduce dependence on fossil fuels. Policies and incentives aimed at promoting clean energy adoption are accelerating these advancements, bringing us closer to a sustainable future.

In conclusion, the recent advancements in renewable energy technologies are a testament to the sector’s potential to transform our energy landscape. As these technologies continue to evolve and improve, they promise to make renewable energy more efficient, affordable, and accessible, driving us towards a world powered by clean and sustainable energy.

Comparison

  • Zero-Shot Approach: The model generates a response based on the given prompt without any examples. The result is generally informative but may lack specific structure or depth compared to more tailored outputs.
  • Few-Shot Approach: The model uses provided examples to understand the desired format and style better. The result is typically more refined, with a clearer structure, engaging title, and a more professional tone, closely matching the given examples.

The few-shot approach provides a more polished and structured output by giving the model specific examples to mimic, resulting in higher quality and more contextually relevant content.

OpenAI API

To access the API, visit the OpenAI Platform and sign up or log in.
To use the ChatGPT prompt–response interface, visit https://chatgpt.com/

Different OpenAI models: https://platform.openai.com/docs/models/gpt-4o
To learn more about InstructGPT, read this paper: https://arxiv.org/abs/2203.02155

The ChatGPT API allows developers to integrate ChatGPT into their own applications, products, or services. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response. Learn more about ChatGPT in the blog post. To learn more about the ChatGPT API, please visit our API articles.

ChatGPT Plus Subscription: ChatGPT Plus is a subscription plan for the ChatGPT product that provides enhanced features in the ChatGPT interface, such as access during peak times, faster response times, and priority access to new features. Cost: ChatGPT Plus is typically offered at a monthly subscription fee (e.g., $20/month). The ChatGPT Plus subscription does not include access to the ChatGPT API; the two are billed separately. The API has its own pricing, which can be found at https://openai.com/pricing. See also https://community.openai.com/t/api-access-without-paying-for-a-chatgpt-plus-subscription/330848

Plus – For individuals looking to amplify their productivity ($20/month):

  • Early access to new features
  • Access to GPT-4, GPT-4o, GPT-3.5
  • Up to 5x more messages for GPT-4o
  • Access to data analysis, file uploads, vision, and web browsing
  • DALL·E image generation
  • Create and use custom GPTs

OpenAI API Access: The OpenAI API allows developers to access OpenAI’s models (like GPT-3 and GPT-4) for a variety of use cases including but not limited to conversational AI, text generation, summarization, and more. Billing: The OpenAI API operates on a pay-as-you-go model, where you are billed based on the number of tokens you use. Different models (Davinci, Curie, etc.) have different pricing.

1. Separate Billing Systems: The billing for ChatGPT Plus and the OpenAI API are handled separately. Paying for ChatGPT Plus does not automatically grant you access to the OpenAI API or cover the costs associated with it.
2. Different Use Cases: ChatGPT Plus is for enhanced use of the ChatGPT interface, while the OpenAI API is intended for developers who want to integrate the models into their own applications.

How to Use the OpenAI API

1. Sign Up: Create an account on the OpenAI website if you don’t already have one.
2. API Key: Obtain an API key from the OpenAI dashboard.
3. Billing Information: Set up billing information in the OpenAI dashboard. You might need to provide payment details and set up a billing account separate from your ChatGPT Plus subscription.
4. Pay-as-You-Go: Use the API on a pay-as-you-go basis, where you will be billed based on your usage.

API calls are rate-limited by TPM (tokens per minute), which differs by model. You are also billed per token when using the API. For instance, gpt-3.5-turbo costs $0.002 per 1,000 tokens. You are billed for the tokens sent in your prompt plus the tokens in the completion returned by the API. We’ll discuss the pricing model and ways to effectively manage your costs in an upcoming lesson.
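As a sketch of that billing rule: total cost is (prompt tokens + completion tokens) times the per-token rate. The helper below uses the $0.002 per 1K tokens figure quoted above for gpt-3.5-turbo; the real token counts for a call are reported back in `response.usage`.

```python
# Cost estimate for a single call: you pay for prompt tokens plus completion
# tokens. The $0.002 per 1K tokens rate matches the gpt-3.5-turbo figure
# above; check https://openai.com/pricing for current rates per model.
def estimate_cost(prompt_tokens, completion_tokens, usd_per_1k=0.002):
    return (prompt_tokens + completion_tokens) / 1000 * usd_per_1k

# e.g., a 500-token prompt and a 300-token reply
print(f"${estimate_cost(500, 300):.4f}")  # $0.0016
```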

For the API, let’s look at ways to estimate usage costs for advanced data analysis and the Code Interpreter tool. ChatGPT Plus and ChatGPT Enterprise provide unlimited access to the advanced data analysis feature; ChatGPT Plus is $20 a month as of January 2024, and for ChatGPT Enterprise pricing you’ll need to contact OpenAI support.

Now let’s talk about pricing for the Code Interpreter tool in the Assistants API. Code Interpreter is charged at $0.03 per session. A session is based on a thread: if your assistant calls the Code Interpreter tool simultaneously in two different threads, you’re charged for two sessions, three for three, and so on. Each session stays active for one hour, and you can interact with it multiple times in that window while paying only once. This means you can keep giving instructions to Code Interpreter in the same thread for up to one hour without incurring additional charges. You do not pay for files attached to the assistant or message when used with Code Interpreter.

Don’t forget the standard per-token fee when using OpenAI’s models, and how it intersects with messages, threads, and the Assistants API. Although you send only a single new message to a thread, all past messages are stored server-side and passed to the model on every interaction. Because the entire conversation history is sent each time, you pay for all of those tokens on every exchange. Remember this when estimating cost: tokens accumulate every time you add a new message.

How to set up my API key in Colab and authenticate?

# Step 1: Install the OpenAI library
!pip install openai
!pip install python-dotenv

# Step 2: Mount Google Drive and store the key in a .env file
from google.colab import drive
drive.mount('/content/drive')

path = '/content/drive/My Drive/Colab Notebooks/openai/'

import os
os.chdir(path)

# Create a .env file and write the OpenAI API key to it
with open(path + '.env', 'w') as file:
    file.write('OPENAI_API_KEY=enter your key here')

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())  # read local .env file

from openai import OpenAI
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])

# Step 3: Define the function to interact with the ChatGPT API
def get_chatgpt_response(messages):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content.strip()

# Step 4: Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the theory of relativity in simple terms."}
]
response = get_chatgpt_response(messages)
print(response)

This setup ensures your API key is securely stored and used for authentication while interacting with the OpenAI API in Google Colab.

Security Considerations:

  • When sharing notebooks, ensure you do not accidentally expose your API key. Do not hard-code the API key directly in the cells if you plan to share the notebook.
  • Instead, prompt users to input their API key manually if they run the notebook. Here’s an example of how to prompt for the API key:
import os
from getpass import getpass
from openai import OpenAI

# Prompt the user to input the API key securely
api_key = getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = api_key

# Authenticate using the API key from the environment variable
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Define the function to interact with the ChatGPT API
def get_chatgpt_response(messages):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content.strip()

# Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the theory of relativity in simple terms."}
]
response = get_chatgpt_response(messages)
print(response)

Are the OpenAI API and the ChatGPT API the same?

OpenAI provides various APIs, and the term “ChatGPT API” generally refers to a specific endpoint of the OpenAI API designed for interacting with the GPT-3.5 model (or similar models). The OpenAI API encompasses a broader range of functionalities, including:

1. Completion API: This API allows you to generate text based on a given prompt. This is commonly used to access models like GPT-3.5 for various natural language processing tasks.

2. Chat API: This API is specifically designed for conversation-like interactions, where you can send a series of messages and receive responses that simulate a chat experience.

3. Fine-tuning API: This allows you to fine-tune existing models on your own dataset to better suit your specific needs.

4. Other APIs: OpenAI also offers APIs for image generation (like DALL-E), code completion (like Codex), and other functionalities.

For the purpose of using GPT-3.5 in a chat-like manner, you typically use the Completion API or the Chat API endpoint. Here’s how you can use the Chat API in Google Colab:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Define a function to interact with the ChatGPT API
def get_chatgpt_response(messages):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content.strip()

# Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the theory of relativity in simple terms."}
]
response = get_chatgpt_response(messages)
print(response)

– Define a function to get a response from ChatGPT: This function sends a list of messages to the API and returns the response. The `messages` list includes a system message to set the behavior and user messages to simulate a conversation.
– Example usage: This part demonstrates how to use the function with a sample conversation.

The `messages` list should follow the format where each message is a dictionary with a `role` (either “system”, “user”, or “assistant”) and `content` (the text of the message). The `system` message sets the context or behavior of the assistant, while `user` messages are from the user, and `assistant` messages are the responses from the model.

By using this structure, you can create more interactive and context-aware conversations with the ChatGPT model.
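For example, a follow-up turn is built by appending the assistant’s last reply to the history before the next user message (a hypothetical exchange):

```python
# Multi-turn structure: append the assistant's previous reply before the next
# user message so the model sees the full conversation history.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Suppose the model replied "Paris."; record it, then ask a follow-up that
# only makes sense with that context.
history.append({"role": "assistant", "content": "Paris."})
history.append({"role": "user", "content": "What is its population?"})

print([m["role"] for m in history])  # ['system', 'user', 'assistant', 'user']
```

Passing `history` as `messages` lets the model resolve "its" to Paris; without the appended assistant turn, the follow-up question would be ambiguous.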

Terminologies

OpenAI Glossary Cheat Sheet

Temperature: Lower values for temperature result in more consistent output (e.g. 0.2), while higher values generate more diverse and creative results (e.g. 1.0).

Function calling: the model’s ability to generate JSON arguments that your code can use to call a custom function you define.

prompt: the input text sent to the model, from which it generates a completion.

Assistant: the message role used for the model’s replies in a chat conversation.

Perplexity: A measure indicating how well a probability model predicts a sample. In the context of NLP, a lower perplexity score implies a better prediction of the next word in a sequence

Generating embeddings is another powerful feature offered by OpenAI. It allows developers to convert text into numerical vectors, which can be used for various machine learning tasks.

ChatGPT plugins are a recent addition to the OpenAI ecosystem, designed to enhance the functionality of ChatGPT by integrating external services.

LLaMA (Large Language Model Meta AI) was released by Meta AI in February 2023; it was trained with significantly fewer parameters than the largest contemporary models while remaining competitive.

How to estimate API cost

https://platform.openai.com/tokenizer
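The tokenizer page above counts tokens exactly; as a rule of thumb for English text, one token is roughly 4 characters or about 3/4 of a word. A quick estimator based on that heuristic (use the tokenizer page or the tiktoken library for exact counts):

```python
# Rough token estimate using OpenAI's rule of thumb for English text:
# roughly 4 characters (about 0.75 words) per token. For exact counts use
# tiktoken or the tokenizer page above.
def rough_token_count(text):
    return max(1, round(len(text) / 4))

sample = "The quick brown fox jumps over the lazy dog."
print(rough_token_count(sample))  # 11
```

Multiplying this estimate by the model’s per-token rate gives a quick cost ballpark before sending a request.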

ChatGPT parameters

ChatGPT parameters refer to various settings and configurations that influence how the model behaves and generates responses. Understanding these parameters and their uses can help you tailor ChatGPT to meet specific needs effectively. Here are some key parameters and how they can be used:

1. Temperature

  • Definition: Controls the randomness of the model’s output.
  • Range: Typically between 0.0 and 1.0.
  • Use: Lower values (e.g., 0.2) make the output more focused and deterministic, while higher values (e.g., 0.8) make the output more diverse and creative.
  • Example: For generating creative writing or brainstorming ideas, use a higher temperature. For factual and precise responses, use a lower temperature.

2. Max Tokens

  • Definition: Limits the maximum number of tokens (words or parts of words) in the generated response.
  • Use: Helps control the length of the response to suit specific requirements.
  • Example: For short and concise answers, set a low max token limit (e.g., 50). For detailed explanations or narratives, set a higher limit (e.g., 150).

3. Top-p (Nucleus Sampling)

  • Definition: Controls the cumulative probability of token choices.
  • Range: Between 0.0 and 1.0.
  • Use: Limits the model to choosing from the top subset of tokens that have a cumulative probability of p. This helps balance between focusing and generating diverse outputs.
  • Example: For ensuring that only the most likely tokens are chosen, use a lower top-p value (e.g., 0.1). For more variety in responses, use a higher value (e.g., 0.9).

4. Frequency Penalty

  • Definition: Reduces the likelihood of the model repeating the same token.
  • Range: Typically between 0.0 and 1.0.
  • Use: Increases the diversity of the generated text by penalizing frequent tokens.
  • Example: For generating unique content without repetition, apply a frequency penalty (e.g., 0.5).

5. Presence Penalty

  • Definition: Encourages the model to introduce new tokens that have not appeared yet in the text.
  • Range: Typically between 0.0 and 1.0.
  • Use: Promotes the inclusion of new ideas and concepts in the response.
  • Example: For creative writing or generating novel ideas, use a presence penalty (e.g., 0.6).

6. Stop Sequences

  • Definition: Specifies one or more sequences where the model will stop generating further tokens.
  • Use: Controls the endpoint of the response, ensuring it stops at a logical point.
  • Example: To ensure responses end with a specific phrase or within a certain context, define a stop sequence like ["\n", ".", "!"].

Example Use Case: Customer Support Chatbot

Suppose you are deploying a ChatGPT plugin as a customer support chatbot. You might configure the parameters as follows:

  • Temperature: 0.3 (to ensure precise and helpful responses)
  • Max Tokens: 100 (to provide detailed yet concise answers)
  • Top-p: 0.9 (to allow some variety in responses while maintaining relevance)
  • Frequency Penalty: 0.2 (to avoid repetitive answers)
  • Presence Penalty: 0.1 (to encourage slightly more diverse responses without deviating too much)
  • Stop Sequences: ["\n", ".", "!"] (to ensure the response ends cleanly)

With these settings, the chatbot will deliver precise, helpful, and well-rounded answers to customer inquiries, maintaining professionalism and efficiency while allowing for some natural variation in responses.
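Those settings translate directly into keyword arguments of a chat completion call. The dict below mirrors the configuration above; the actual API call is left commented since it needs a key:

```python
# The support-bot settings above, expressed as chat-completion parameters.
support_params = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.3,
    "max_tokens": 100,
    "top_p": 0.9,
    "frequency_penalty": 0.2,
    "presence_penalty": 0.1,
    "stop": ["\n", ".", "!"],
}

messages = [
    {"role": "system", "content": "You are a courteous support agent."},
    {"role": "user", "content": "How do I reset my password?"},
]

# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(messages=messages, **support_params)
print(support_params["temperature"], support_params["stop"])
```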

An example chat completion API call

A chat completion API call takes the following parameters.

Required

  • model: the name of the model you want to use (e.g., gpt-3.5-turbo, gpt-4, gpt-3.5-turbo-16k-1106)
  • messages: a list of message objects, where each object has two required fields:
    • role: the role of the messenger (either system, user, assistant, or tool)
    • content: the content of the message (e.g., Write me a beautiful poem)

Messages can also contain an optional name field, which gives the messenger a name (e.g., example-user, Alice, BlackbeardBot). Names may not contain spaces.

Optional

  • frequency_penalty: Penalizes tokens based on their frequency, reducing repetition.
  • logit_bias: Modifies likelihood of specified tokens with bias values.
  • logprobs: Returns log probabilities of output tokens if true.
  • top_logprobs: Specifies the number of most likely tokens to return at each position.
  • max_tokens: Sets the maximum number of generated tokens in chat completion.
  • n: Generates a specified number of chat completion choices for each input.
  • presence_penalty: Penalizes new tokens based on their presence in the text.
  • response_format: Specifies the output format, e.g., JSON mode.
  • seed: Ensures deterministic sampling with a specified seed.
  • stop: Specifies up to 4 sequences where the API should stop generating tokens.
  • stream: Sends partial message deltas as tokens become available.
  • temperature: Sets the sampling temperature between 0 and 2.
  • top_p: Uses nucleus sampling; considers tokens with top_p probability mass.
  • tools: Lists functions the model may call.
  • tool_choice: Controls the model’s function calls (none/auto/function).
  • user: Unique identifier for end-user monitoring and abuse detection.
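As an example of the tools and tool_choice parameters above, here is a sketch of the function-calling flow; `get_weather` and its schema are made up for illustration, and the model's JSON arguments are simulated rather than fetched from a live call:

```python
import json

# Function-calling sketch: describe a function to the model via `tools`;
# the model replies with JSON arguments, which your code parses and executes.
# `get_weather` is a made-up example function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city):
    return f"Sunny in {city}"  # stub; a real version would call a weather API

# In a live call the model returns arguments like this in
# response.choices[0].message.tool_calls[0].function.arguments:
raw_arguments = '{"city": "Seoul"}'
args = json.loads(raw_arguments)
print(get_weather(**args))  # Sunny in Seoul
```

Your code then sends the function's return value back to the model in a follow-up message with role "tool", so the model can phrase the final answer.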


Endpoints

Audio – create speech

from pathlib import Path
import openai

speech_file_path = Path(__file__).parent / "speech.mp3"
response = openai.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="The quick brown fox jumped over the lazy dog."
)
response.stream_to_file(speech_file_path)

Create chat completion

from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)


Create embeddings

from openai import OpenAI
client = OpenAI()

client.embeddings.create(
    model="text-embedding-ada-002",
    input="The food was delicious and the waiter...",
    encoding_format="float"
)


Create fine-tuning job

from openai import OpenAI
client = OpenAI()

client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-3.5-turbo"
)


and more ….

GPTs vs Assistants

GPTs

GPTs are custom versions of ChatGPT that users can tailor for specific tasks or topics by combining instructions, knowledge, and capabilities. They can be as simple or as complex as needed, addressing anything from language learning to technical support. Plus, Team, and Enterprise users can start creating GPTs at chatgpt.com/create. GPTs live inside ChatGPT and are intended to be created by anyone with a paid subscription using a simple UI.

Assistants

The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling. Assistants are designed to be created by developers using the OpenAI API.


PROMPT: OpenAI provides the Assistants API and also a service for creating a GPT. The relevant sites are https://platform.openai.com/docs/assistants/overview and https://help.openai.com/en/articles/8554397-creating-a-gpt. Explain the main differences between these two services, and provide examples with Python code.

Certainly! Let’s explore the main differences between the OpenAI Assistant API and the service for creating a GPT, using the information from the provided links. Additionally, I’ll provide examples with Python codes for each service.

### OpenAI Assistant API

Purpose:
The Assistant API is designed to create and deploy conversational AI assistants that can handle a wide range of tasks, such as answering questions, providing information, and assisting with various user needs in an interactive manner.

Key Features:
– Conversational Capabilities: The API is optimized for maintaining context over multiple turns in a conversation.
– Customizability: Developers can tailor the assistant’s behavior and responses according to specific use cases.
– Integration: Easily integrates with other applications and services to perform tasks such as booking, scheduling, or retrieving information.

Example with Python Code:

Here is an example of using the OpenAI Assistant API with Python:

import openai

# Set your API key here
openai.api_key = 'YOUR_OPENAI_API_KEY'

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How can I help you today?"}
    ],
    max_tokens=150,
    temperature=0.9,
)

print(response.choices[0].message.content)

### Creating a GPT

Purpose:
Creating a GPT involves configuring a custom version of ChatGPT for specific needs by combining instructions, additional knowledge, and capabilities. This service allows users to create customized GPTs that generate coherent and contextually relevant text for various applications, without training a model from scratch.

Key Features:
– Text Generation: Capable of generating human-like text based on given prompts.
– Customization: Users can supply instructions and knowledge files to improve performance on particular tasks or domains.
– Versatility: Can be used for a wide range of applications, including content creation, language translation, summarization, and more.

Example with Python Code:

Here is an example of using OpenAI’s GPT for text generation with Python:

import openai

# Set your API key here
openai.api_key = 'YOUR_OPENAI_API_KEY'

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Once upon a time in a faraway land, there was a brave knight named Arthur. He embarked on a quest to find..."}
    ],
    max_tokens=150,
    temperature=0.7,
)

print(response.choices[0].message.content)

### Key Differences

1. Purpose and Focus:
– Assistant API: Focuses on creating interactive, task-oriented assistants that can handle multi-turn conversations and integrate with various services.
– Creating a GPT: Centers on generating text and customizing the model for specific text-based applications.

2. Use Cases:
– Assistant API: Ideal for building customer service bots, virtual assistants, and other interactive applications that require maintaining conversational context.
– Creating a GPT: Suitable for content generation, text completion, summarization, and other language-related tasks.

3. Customization and Fine-Tuning:
– Assistant API: Customizes the assistant’s behavior to fit specific interactive use cases.
– Creating a GPT: Configures the GPT with instructions and knowledge files to enhance performance in specific domains or tasks.

These differences highlight the specific capabilities and ideal use cases for each service, providing developers with the tools they need to build either conversational assistants or powerful text generation models.

Creating a GPT

https://help.openai.com/en/articles/8554397-creating-a-gpt

Creating a GPT offers several benefits, particularly for those looking to tailor AI interactions to specific tasks or topics. One key advantage is customization. By creating a GPT, users can design a chatbot that fits their exact needs, whether it’s for language learning, technical support, or any other specific task. This customization includes setting detailed instructions, defining the GPT’s behavior, and incorporating additional capabilities such as web browsing, image generation with DALL·E, and advanced data analysis.

Additionally, creating a GPT allows for integration with third-party APIs and existing plugins, enabling users to extend the functionality of their GPTs even further. This flexibility makes it easier to build sophisticated applications without requiring extensive coding knowledge, as the GPT Builder provides a user-friendly interface for setting up and configuring these custom AI models.

Another benefit is the ability to share and publish GPTs. Users can publish their GPTs to the GPT Store, making them accessible to a broader audience and potentially receiving feedback and recognition for their creations. This can also foster a vibrant ecosystem where creators are rewarded for their innovative contributions.

Overall, the main benefits of creating a GPT include customization to fit specific needs, extended capabilities through integrations, and the opportunity to share and monetize these custom solutions. For more detailed information, you can visit the [OpenAI Help Center](https://help.openai.com/en/articles/8554397-creating-a-gpt).

Assistant API

https://platform.openai.com/docs/assistants/overview

The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and files to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, File Search, and Function calling.

The code interpreter tool is accessible to anyone with an OpenAI account, no need to be a paid subscriber. Several text file types are supported along with image file formats. Additionally, tar files are supported. What I really like about the code interpreter tool is that it can read and generate files and produce graphics like charts, just like the data analysis tool, but you can control it programmatically through code, which is a developer’s dream.

Let’s look at the flow. First, you’ll create an assistant through code. Assistants can call OpenAI models with specific instructions.

Next, you’ll start a thread, which is a conversation between the user and the assistant. You create a thread once and simply append messages as your users reply; threads store the conversation history for you.

Next, you’ll create a run, which invokes an assistant on a thread, kicking off the conversation. The process is straightforward. I can’t wait to show you the flow and Python code. Now that you understand how to access the code interpreter tool via the Assistants API, let’s see how to use the advanced data analysis tool in the real world.
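The assistant → thread → run flow described above can be sketched as follows (hypothetical demo function; the assistant name, instructions, and the `gpt-4o` model choice are placeholders, and a recent openai SDK plus a valid `OPENAI_API_KEY` are assumed):

```python
def run_code_interpreter_demo():
    """Sketch of the assistant -> thread -> run flow (not executed here)."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. Create an assistant with the code interpreter tool enabled.
    assistant = client.beta.assistants.create(
        name="Data helper",
        instructions="You are a helpful data analyst.",
        tools=[{"type": "code_interpreter"}],
        model="gpt-4o",
    )

    # 2. Start a thread and append the user's message.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="Plot y = x**2 for x from 0 to 10.",
    )

    # 3. Create a run to invoke the assistant on the thread, and wait for it.
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )

    # 4. Read back the assistant's reply from the thread.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return run, messages
```

Because the thread stores the history, subsequent user turns only require appending a message and creating another run.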

Capabilities / Examples

prompt examples: https://platform.openai.com/docs/examples

assistant API: https://platform.openai.com/docs/assistants/overview

Looking for more ideas? Check out our Examples or the OpenAI Cookbook on GitHub.


specific tasks:

embedding: https://platform.openai.com/docs/tutorials/web-qa-embeddings

Creating slides with the Assistants API and DALL·E 3: https://cookbook.openai.com/examples/creating_slides_with_assistants_api_and_dall-e3

code interpreter: https://platform.openai.com/docs/assistants/tools/code-interpreter

pip install openai

from openai import OpenAI

client = OpenAI()

# Install the OpenAI package (in a notebook)
!pip install openai

# Import the OpenAI package
import openai

# Set your OpenAI API key
openai.api_key = "your-api-key"

# Make a request to the OpenAI API
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",  # You can also use other models such as "gpt-4"
    messages=[
        {"role": "user", "content": "Once upon a time, in a land far, far away,"}
    ],
    max_tokens=100,
    temperature=0.7,
)

# Print the generated text
generated_text = response.choices[0].message.content.strip()
print(generated_text)

Creating a comprehensive user-friendly manual

Table of Contents

  1. Introduction
    • What is OpenAI API?
    • Key Features
    • Use Cases
  2. Getting Started
    • Account Setup
    • API Key Generation
  3. Installation
    • Installing OpenAI Python Package
    • Basic Setup
  4. Basic Usage
    • Making Your First API Call
  5. Advanced Usage
    • Customizing API Requests
    • Handling Responses
    • Error Handling
  6. Common Use Cases
    • Text Generation
    • Summarization
    • Conversation
    • Translation
  7. Code Examples
    • Detailed Examples with Explanations
  8. Best Practices
    • Rate Limiting
    • Security Considerations
  9. Troubleshooting
    • Common Issues and Fixes
  10. FAQ
    • Frequently Asked Questions

1. Introduction

What is OpenAI API?

The OpenAI API provides access to advanced AI models, enabling developers to integrate natural language understanding, generation, and other AI capabilities into their applications.

Key Features

  • Text Generation: Generate human-like text based on a given prompt.
  • Summarization: Create summaries from longer texts.
  • Conversation: Build interactive chatbots.
  • Translation: Translate text between languages.

Use Cases

  • Chatbots and virtual assistants
  • Content creation
  • Automated customer support
  • Language translation

2. Getting Started

Account Setup

To use the OpenAI API, you need to create an account on the OpenAI website. Follow these steps:

  1. Go to OpenAI Signup Page.
  2. Fill in the required information and verify your email.
  3. Log in to your OpenAI account.

API Key Generation

  1. Navigate to the API keys section in your account dashboard.
  2. Click “Create new secret key”.
  3. Copy and securely store your API key.

3. Installation

Installing OpenAI Python Package

To interact with the OpenAI API using Python, you need to install the openai package. You can install it using pip:

pip install openai

Basic Setup

import openai
# Set up your OpenAI API key
openai.api_key = "your-api-key-here"

4. Basic Usage

Making Your First API Call

Here’s an example of generating a completion for a given prompt:

import openai
openai.api_key = "your-api-key-here"
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Once upon a time in a land far, far away"}],
    max_tokens=50
)
print(response.choices[0].message.content.strip())

5. Advanced Usage

Customizing API Requests

You can customize your API requests with various parameters:

  • model: The model to use (e.g., gpt-3.5-turbo).
  • messages: The list of chat messages sent as input.
  • max_tokens: The maximum number of tokens to generate.
  • temperature: Controls the randomness of the output (0.0 to 2.0).
  • top_p: Controls the diversity via nucleus sampling.

Example:

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain the theory of relativity in simple terms"}],
    max_tokens=150,
    temperature=0.5,
    top_p=1
)
print(response.choices[0].message.content.strip())

Handling Responses

The response object contains various information:

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke"}],
    max_tokens=50
)
text = response.choices[0].message.content.strip()
print(text)

Error Handling

Handle errors gracefully to improve the user experience:

try:
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Generate a poem about the sea"}],
        max_tokens=100
    )
    print(response.choices[0].message.content.strip())
except openai.OpenAIError as e:
    print(f"An error occurred: {e}")

6. Common Use Cases

Text Generation

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a story about a brave knight"}],
    max_tokens=200
)
print(response.choices[0].message.content.strip())

Summarization

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the following text:\n\n[Insert long text here]"}],
    max_tokens=100
)
print(response.choices[0].message.content.strip())

Conversation

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the weather today?"}],
    max_tokens=50
)
print(response.choices[0].message.content.strip())

Translation

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Translate the following English text to French: 'Hello, how are you?'"}],
    max_tokens=60
)
print(response.choices[0].message.content.strip())

7. Code Examples

Example 1: Generating Blog Post Ideas

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Generate some blog post ideas about artificial intelligence"}],
    max_tokens=100
)
print(response.choices[0].message.content.strip())

Example 2: Creating a Chatbot

def chatbot_response(prompt):
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150
    )
    return response.choices[0].message.content.strip()

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    print("AI:", chatbot_response(user_input))

8. Best Practices

Rate Limiting

Respect the rate limits to avoid interruptions:

  • Monitor your usage in the OpenAI dashboard.
  • Implement exponential backoff for retries.
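A minimal sketch of exponential backoff with jitter (hypothetical helper; a production version would catch only rate-limit errors such as `openai.RateLimitError` rather than every exception):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Double the delay each attempt, plus random jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Example (hypothetical): wrap an API call in the retry helper
# completion = with_backoff(lambda: openai.chat.completions.create(...))
```

The jitter spreads retries out so that many clients hitting the same rate limit do not all retry at the same moment.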

Security Considerations

  • Keep your API key secure and do not expose it in client-side code.
  • Rotate API keys regularly.
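One way to keep the key out of your source code is to read it from an environment variable (hypothetical helper name; assumes you have run `export OPENAI_API_KEY="sk-..."` beforehand):

```python
import os

def get_api_key():
    """Read the API key from the environment instead of hard-coding it.

    Hypothetical helper; assumes OPENAI_API_KEY has been exported.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it before running.")
    return key
```

Note that the v1 `OpenAI()` client reads `OPENAI_API_KEY` automatically, so an explicit helper like this is mainly useful for failing fast with a clear message.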

9. Troubleshooting

Common Issues and Fixes

  • Invalid API Key: Ensure your API key is correct and has not expired.
  • Rate Limit Exceeded: Monitor your usage and implement backoff strategies.
  • Unexpected Output: Adjust the temperature and top_p parameters.

10. FAQ

Frequently Asked Questions

Q: How do I change the response length?

A: Adjust the max_tokens parameter in your API request.

Q: What models are available?

A: Check the OpenAI documentation for the latest available models.

Q: How do I handle sensitive content?

A: Use OpenAI’s content filtering tools and review outputs before using them in production.


This outline provides a comprehensive guide to using the OpenAI API with specific Python examples. You can expand each section with more detailed explanations, additional code snippets, and relevant links to the official OpenAI documentation for further reading.

teach me how to use openai api and google api to write news article

ChatGPT

To write a news article using the OpenAI API and Google API, you’ll need to integrate both services to gather information and generate content. Here’s a step-by-step guide:

  1. Set Up Your Environment

First, ensure you have Python installed. You’ll also need to install the openai and google-api-python-client packages.

pip install openai google-api-python-client

  2. Get API Keys
  • OpenAI API Key: Sign up at OpenAI, then navigate to the API keys section to generate a key.
  • Google API Key: Create a project on the Google Cloud Console, enable the necessary APIs (e.g., Google News API, Custom Search API), and create an API key.
  3. Basic Setup

OpenAI Setup

import openai

openai.api_key = 'your-openai-api-key'

Google API Setup

from googleapiclient.discovery import build

# Replace with your Google API key
google_api_key = 'your-google-api-key'

# Create this in the Google Custom Search Engine dashboard
cse_id = 'your-custom-search-engine-id'

  4. Define Functions to Fetch News and Generate Content

Fetch News Articles

Create a function to search for news articles using Google’s Custom Search API.

def fetch_news(query):
    service = build("customsearch", "v1", developerKey=google_api_key)
    res = service.cse().list(q=query, cx=cse_id, num=5).execute()
    return res['items']

Generate Content with OpenAI

Create a function to generate a news article using OpenAI’s GPT-4.

def generate_article(prompt):
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=600,
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()
  5. Integrate and Generate News Articles

Combine the fetched news and content generation to create a full news article.

def create_news_article(query):
    # Fetch news articles
    articles = fetch_news(query)

    # Generate a summary or introductory paragraph
    intro_prompt = "Write an introductory paragraph for a news article about the following headlines:\n"
    for article in articles:
        intro_prompt += f"- {article['title']}\n"

    intro = generate_article(intro_prompt)

    # Create the body of the article
    body = ""
    for article in articles:
        title = article['title']
        snippet = article.get('snippet', '')
        link = article['link']

        body_prompt = "Write a detailed paragraph about the following headline and snippet:\n"
        body_prompt += f"Headline: {title}\nSnippet: {snippet}\n"

        body += generate_article(body_prompt) + "\n"
        body += f"Read more at: {link}\n\n"

    # Combine intro and body
    news_article = intro + "\n\n" + body
    return news_article


# Example usage
query = "latest technology news"
article = create_news_article(query)
print(article)
  6. Example Output

When you run the script with a specific query (e.g., “latest technology news”), it will fetch recent news articles using Google’s Custom Search API and generate a comprehensive news article using OpenAI’s GPT-4.

Summary

  • Set Up Environment: Install necessary libraries and get API keys.
  • Fetch News: Use Google Custom Search API to fetch recent news articles.
  • Generate Content: Use OpenAI API to generate the article content.
  • Integrate: Combine fetched news data and generated content to create a full news article.

This approach leverages the strengths of both APIs: Google’s capability to fetch up-to-date information and OpenAI’s ability to generate coherent and engaging content.
