How to use the gpt-4-turbo API

Nov 13, 2023
This article describes how to use the gpt-4-turbo API. First it introduces the features of gpt-4-turbo, including text generation, conversational systems, translation, and summary generation. It then details how to call gpt-4-turbo via API requests, with sample code and returned results, and covers how to use JSON mode with the gpt-4-1106-preview model. Reference links are provided at the end.


What is gpt-4-turbo

GPT-4 Turbo is the latest generation model from OpenAI. It is more capable than the previous GPT-4 model, has an updated knowledge cutoff (April 2023), and introduces a 128k context window (roughly 300 pages of text). Compared with the original GPT-4, input tokens are 3x cheaper and output tokens are 2x cheaper. The model's maximum output is 4,096 tokens. Anyone with an OpenAI API account can access this model: set the model name to "gpt-4-1106-preview" in the API to use GPT-4 Turbo.

What can the gpt4 turbo API do

The GPT-4 Turbo API can be used for the following tasks:
  1. Text generation: You can use the GPT-4 Turbo API to generate articles, stories, poetry, and other text content.
  2. Conversational systems: You can build chatbots or conversational systems using the GPT-4 Turbo API to generate dialogue and respond to user questions.
  3. Translation: The GPT-4 Turbo API can be used for text translation, translating text from one language to another.
  4. Summary generation: You can use the GPT-4 Turbo API to generate summaries of text, extracting key information and main content.
  5. Content creation: The GPT-4 Turbo API can help creators generate various types of content, such as blog articles, product descriptions, and advertising copy.
  6. Question answering: You can use the GPT-4 Turbo API to answer user questions, providing accurate and useful answers.
  7. Educational assistance: The GPT-4 Turbo API can be used in the education field to help students answer questions and provide study materials.
  8. News generation: You can use the GPT-4 Turbo API to generate news articles, news summaries, and other news content.

Creating a gpt4 turbo API request

In this example, I want ChatGPT to act as a mental health counselor. How do we get it to answer mental-health questions through code? Let's walk through the approach below, playing the role of a patient seeking help from the counselor.
Here we set the model to the latest gpt-4-1106-preview and give the system role a psychological-counselor persona, equipping the counselor with a body of psychological knowledge. It has good professional ethics and can counsel the client in a friendly way.
Then, playing the patient, I put my question to the gpt-4-1106-preview model: "I can't sleep well at night. I wake up four or five times a night. It's so uncomfortable. Can you help me?"
In the code, replace OPENAI_API_KEY with your own key, and make sure your account has access to gpt-4-1106-preview, otherwise the request will fail.
Let's run the code and see how gpt-4-1106-preview responds and what suggestions it offers.
I executed it with Python, referencing the following code:
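A minimal sketch of such a request, assuming the raw Chat Completions HTTP endpoint rather than any particular SDK; the counselor system-prompt wording is my own illustration:

```python
# Minimal sketch: a Chat Completions request sent as raw HTTP (no SDK).
# The counselor system prompt below is illustrative wording, not official.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_question: str) -> dict:
    """Assemble the request body: model, counselor persona, patient question."""
    return {
        "model": "gpt-4-1106-preview",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are an experienced, empathetic mental health "
                    "counselor with solid psychological knowledge and good "
                    "professional ethics. Counsel the client in a friendly way."
                ),
            },
            {"role": "user", "content": user_question},
        ],
    }

def ask_counselor(question: str) -> str:
    # Requires OPENAI_API_KEY in the environment and access to gpt-4-1106-preview.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The generated answer lives in the first choice's message content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask_counselor(
        "I can't sleep well at night. I wake up four or five times a night. "
        "It's so uncomfortable. Can you help me?"
    ))
```

The same request can of course be made through the official openai SDK; only the transport differs, the payload shape is the same.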
The log of the operation is as follows:
[Screenshot: run log showing the outgoing request]
This is the content of the request body. Below is the return value. As we can see, the gpt-4-1106-preview model was indeed used.
[Screenshot: API response, confirming the gpt-4-1106-preview model was used]
Now let's look at the answer it generated for us. Not bad, right?
The advice given seems quite practical and actionable.

Using JSON mode with gpt4 turbo

As OpenAI explains in its json-mode documentation, even if you specify in the system message that the model should return JSON, the model cannot guarantee a valid JSON object. To prevent errors, when calling gpt-4-1106-preview or gpt-3.5-turbo-1106 you can set response_format to { "type": "json_object" } to enable JSON mode. With JSON mode enabled, the model is constrained to generating only strings that parse as valid JSON objects.
Let's continue the above code and make further modifications so that the return value is in JSON format.
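As a sketch of that modification (the payload-builder style and system-prompt wording are my own illustration), the key changes are the response_format field and a system message that explicitly mentions JSON:

```python
# Sketch of the JSON-mode variant of the request body. Two things matter:
# response_format enables JSON mode, and the word "JSON" must appear
# somewhere in the messages or the API rejects the request.
def build_json_payload(user_question: str) -> dict:
    return {
        "model": "gpt-4-1106-preview",
        "response_format": {"type": "json_object"},  # enable JSON mode
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are an empathetic mental health counselor. "
                    'Reply as a JSON object, for example {"advice": [...]}.'
                ),
            },
            {"role": "user", "content": user_question},
        ],
    }
```

With this payload substituted for the previous one, the rest of the request code stays the same.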
Let's run it again and see the result:
You can use the https://www.json2.top/ website to parse and format the JSON content. Once formatted, you can see that gpt-4-1106-preview now renders its answer as JSON based on its understanding of your request, which is very helpful for developers. Before this feature, only function calling could produce guaranteed JSON, so this is a real benefit for third-party developers.
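As an alternative to a web formatter, the returned string can also be parsed and pretty-printed locally with Python's standard json module (the sample string below is an illustrative stand-in for the model's output):

```python
import json

# Illustrative stand-in for the JSON string the model returned
raw = '{"advice": ["keep a regular sleep schedule", "limit caffeine after noon"]}'

data = json.loads(raw)  # parse the JSON string into a Python dict
print(json.dumps(data, indent=2, ensure_ascii=False))  # pretty-print it
```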
Note that response_format itself currently only supports JSON mode; for other formats such as XML, you still have to ask in the prompt, without the same structural guarantee.
Returning to the topic of JSON format: after enabling response_format={ "type": "json_object" }, you need to make sure your messages mention the string "json" somewhere, otherwise the request will error out. Below is an error log I encountered 😂
The above covers enabling JSON mode with gpt-4-1106-preview. Of course, this is just a brief introduction; to truly harness the power of the JSON formatting feature, developers will need to explore and experiment on their own.