
Official OpenAI Library for .NET – Sample App is here!

With the official release of the OpenAI library for .NET, integrating powerful language models from OpenAI directly into your .NET applications has become a seamless experience. This library offers comprehensive support for the entire OpenAI API, including Assistants v2, Chat Completions, GPT-4, and both synchronous and asynchronous APIs. It also provides access to streaming completions via IAsyncEnumerable<T>.

So, what better way to explore this new library than by updating our existing ChatGPT sample? In this blog, we’ll walk through the process of integrating the OpenAI library, highlight some key differences from previous implementations, and share practical examples to help you make the most of the official OpenAI library for .NET.

Let’s dive in and explore the updates!

Note:

This blog focuses solely on the new implementation of the official OpenAI library. For a comprehensive tutorial on building a ChatGPT-enabled chat application, including C# markup for the UI, the Uno.Extensions.Configuration package, and MVUX, please refer to the original blog.

Integration with OpenAI Services

Integrating OpenAI services into Uno Platform applications opens up a world of possibilities for creating intelligent and interactive experiences. With the official OpenAI NuGet package, developers can access OpenAI services such as ChatGPT and DALL-E with ease.

To begin the integration, we can register a `ChatClient` instance as a singleton in the Dependency Injection container. In the App.xaml.cs file, add the following code to the `ConfigureServices` section inside the `OnLaunched` method:

var section = context.Configuration.GetSection(nameof(AppConfig));
var apiKey = section[nameof(AppConfig.ApiKey)];

services.AddSingleton(new ChatClient("gpt-3.5-turbo", apiKey));
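Here, `AppConfig` is the configuration record from the original sample (set up with the Uno.Extensions.Configuration package, as covered in the original blog). A minimal sketch consistent with how it is used above might look like this:

public record AppConfig
{
    // Bound from the AppConfig section of appsettings.json;
    // this sketch assumes the sample only needs the API key here.
    public string? ApiKey { get; init; }
}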

Then we can set up our `ChatService` class, where we add a constructor that takes a `ChatClient` as a parameter:

private readonly ChatClient _client;

public ChatService(ChatClient client)
{
    _client = client;
}
Once that is done, we can start implementing the methods. We are particularly interested in two methods of the `ChatClient` class:

– `CompleteChatAsync`, which takes a request as a parameter and returns a `ChatCompletion` object with the AI response.

– `CompleteChatStreamingAsync`, which takes a request as a parameter and asynchronously returns the AI response as it gets generated.

Let’s see how to implement the `AskAsync` method, which is responsible for sending the request and receiving the chat’s response. This method takes a `ChatRequest` record as a parameter and returns a `ChatResponse` record. Inside the method, the chat history is converted to a `ChatMessage` array so that we can send the request. Then, the `CompleteChatAsync` method of the client is called to get a `ChatCompletion` result that contains the chat response.

The method then evaluates the `FinishReason` of the result to determine the appropriate `ChatResponse`. If the completion stopped normally, a `ChatResponse` with the result’s string representation is returned. If the completion stopped for any other reason, an error message is returned.
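The `ChatRequest` and `ChatResponse` records are part of the sample app rather than the OpenAI library. As a point of reference, a `ChatResponse` shape consistent with how it is constructed in the snippets below might be:

// A hypothetical shape inferred from usage in this blog; the sample's
// actual record may define additional members.
public record ChatResponse(string Message = "", bool IsError = false);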

public async ValueTask<ChatResponse> AskAsync(ChatRequest chatRequest, CancellationToken ct = default)
{
    try
    {
        // Convert the history messages to a `ChatMessage` array
        var request = ToCompletionRequest(chatRequest);

        // Make the request, sending the history to the AI
        ChatCompletion result = await _client.CompleteChatAsync(request);

        return result.FinishReason switch
        {
            ChatFinishReason.Stop => new ChatResponse(result.ToString()), // There was no error, so return the AI message to the user
            ChatFinishReason.Length => new ChatResponse("Incomplete model output due to MaxTokens parameter or token limit exceeded.", IsError: true),
            ChatFinishReason.ContentFilter => new ChatResponse("Omitted content due to a content filter flag.", IsError: true),
            _ => new ChatResponse(result.FinishReason.ToString())
        };
    }
    catch (Exception ex)
    {
        return new ChatResponse($"Something went wrong: {ex.Message}", IsError: true);
    }
}

Now let’s implement the `AskAsStream` method. The main difference between this method and `AskAsync` is that it returns the message as it gets generated by the AI, instead of waiting for the whole message to be generated. Like `AskAsync`, this method takes a `ChatRequest` record as a parameter, but it returns an asynchronous stream of `ChatResponse` objects.

As before, the chat history is first converted to a `ChatMessage` array so that we can send the request. Then, the `CompleteChatStreamingAsync` method of the client is called, and an enumerator is obtained from its result to iterate through the streaming updates. Within the `while` loop, the method iterates through the updates, appending the text from each `ContentUpdate`, which holds the new part of the response, to a `StringBuilder`. The `ChatResponse` record is updated with the current content after each update. If an exception occurs while processing an update, the response is updated to indicate an error with the exception message. The current `ChatResponse` is yielded after each update so that the UI can be updated accordingly.

public async IAsyncEnumerable<ChatResponse> AskAsStream(ChatRequest chatRequest, [EnumeratorCancellation] CancellationToken ct = default)
{
    // Convert the history messages to a `ChatMessage` array
    var request = ToCompletionRequest(chatRequest);

    var response = new ChatResponse();
    var content = new StringBuilder();

    IAsyncEnumerator<StreamingChatCompletionUpdate>? responseStream = default;

    while (!response.IsError)
    {
        try
        {
            // Make the request, sending the history to the AI
            responseStream ??= _client.CompleteChatStreamingAsync(request).GetAsyncEnumerator(ct);

            // Check if the AI is still sending responses
            if (await responseStream.MoveNextAsync())
            {
                foreach (var updatePart in responseStream.Current.ContentUpdate)
                {
                    // Concatenate the new part of the response with the existing response
                    content.Append(updatePart.Text);
                }

                // Update the response record with the new part of the response
                response = response with { Message = content.ToString() };
            }
            else
            {
                // Break the loop if the response is complete
                yield break;
            }
        }
        catch (Exception ex)
        {
            response = response with { Message = $"Something went wrong: {ex.Message}", IsError = true };
        }

        // Return the updated response record so that the UI can be updated
        yield return response;
    }
}

Users have the option to input a message to provide context for ChatGPT, guiding its conversation or behavior. For instance, they can provide background information or specific topics of interest. In our sample, we use this context: “You are Uno ChatGPT Sample, a helpful assistant helping users learn more about how to develop using Uno Platform.” ChatGPT can adopt a particular persona or focus its responses accordingly. For example, users could input lines like “You are Borat, a clueless journalist from Kazakhstan” or “You are Buzz Lightyear, a space ranger on a mission to infinity and beyond,” allowing ChatGPT to respond in character or tailor its answers to match the chosen persona.
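This context is typically sent as a system message at the start of the conversation. The `ToCompletionRequest` helper used in the snippets above belongs to the sample app; here is a minimal sketch, assuming a `History` collection on `ChatRequest` whose entries expose hypothetical `IsUser` and `Content` members:

private ChatMessage[] ToCompletionRequest(ChatRequest chatRequest)
{
    var messages = new List<ChatMessage>
    {
        // The context message that guides the assistant's behavior
        new SystemChatMessage(
            "You are Uno ChatGPT Sample, a helpful assistant helping users " +
            "learn more about how to develop using Uno Platform.")
    };

    foreach (var entry in chatRequest.History)
    {
        // IsUser and Content are assumed member names, not the sample's actual API
        messages.Add(entry.IsUser
            ? new UserChatMessage(entry.Content)
            : (ChatMessage)new AssistantChatMessage(entry.Content));
    }

    return messages.ToArray();
}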

To ensure that this setup functions correctly, remember to register our `ChatService` as a singleton in the Dependency Injection container.

var section = context.Configuration.GetSection(nameof(AppConfig));
var apiKey = section[nameof(AppConfig.ApiKey)];

services.AddSingleton(new ChatClient("gpt-3.5-turbo", apiKey))
        .AddSingleton<IChatService, ChatService>();

The `ChatService` acts as a bridge between the model and OpenAI services, allowing developers to incorporate AI-driven functionalities into their applications.
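As a quick illustration, a view model might consume the service like this once it is resolved from the container; the view-model shape below is hypothetical, but the `IChatService` call mirrors the methods implemented above:

public partial class ChatViewModel
{
    private readonly IChatService _chatService;

    public ChatViewModel(IChatService chatService)
        => _chatService = chatService;

    // Hypothetical bindable property holding the latest AI message
    public string? LatestMessage { get; private set; }

    public async ValueTask SendAsync(ChatRequest request, CancellationToken ct)
    {
        // Stream partial responses and surface each update to the UI
        await foreach (var response in _chatService.AskAsStream(request, ct))
        {
            LatestMessage = response.Message;
        }
    }
}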

For more information, please see our Dependency Injection docs.

Next Steps

If you are new to Uno Platform, install the Uno Platform extension and follow the beginner-oriented Counter App or Simple Calc tutorial. Both are doable during a coffee break. For more advanced topics, use our Tube Player tutorial.
