Building an AI-Driven Chat Application with .NET, Azure OpenAI, and Angular

In this article, we’ll walk through creating a conversational AI application powered by Azure OpenAI, a .NET 9 backend, and an Angular frontend. Users will be able to chat with the AI, upload documents, and receive real-time, AI-generated responses. By following this guide, you’ll learn how to:

  • Integrate Azure OpenAI for natural language responses
  • Handle file uploads and document analysis with Azure Blob Storage and Azure Form Recognizer
  • Stream AI responses in real-time for a dynamic user experience
  • Build a responsive chat interface using Angular and Angular Material

By the end, you’ll have a working application that demonstrates the capabilities of Azure’s AI services and shows how easily you can integrate them into your projects.

Backend Overview: .NET API with Azure OpenAI

Our backend uses .NET 9 and several Azure services:

  1. Azure OpenAI: Generates intelligent, AI-driven responses.
  2. Azure Blob Storage: Stores user-uploaded documents.
  3. Azure Form Recognizer: Analyzes documents (such as PDFs) to extract text and information.
  4. OpenXML SDK: Extracts text from Word documents.

The backend is responsible for:

  • Accepting user input and prompts
  • Uploading and analyzing documents
  • Fetching AI-generated responses from Azure OpenAI
  • Streaming these responses back to the client in real-time

1. Setting Up the Project

Create a New Project:

  dotnet new webapi -n GenAI.Api

Install the necessary packages for Azure AI, Blob Storage, and Document Analysis:

dotnet add package Azure.AI.OpenAI
dotnet add package Azure.Storage.Blobs
dotnet add package Azure.AI.FormRecognizer
dotnet add package DocumentFormat.OpenXml
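
Register Services: The article doesn't show the dependency-injection setup, so here is a minimal sketch of how Program.cs might wire up the Azure clients and the services used later by the controller. The configuration keys ("OpenAI:*", "BlobStorage:ConnectionString") and the OpenApiService class name are assumptions; adjust them to match your project.

// Program.cs — a minimal sketch (configuration keys are assumptions)
using Azure;
using Azure.AI.OpenAI;
using Azure.Storage.Blobs;
using GenAI.Api.Services;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Azure OpenAI client, shared by the OpenAI service
builder.Services.AddSingleton(sp => new AzureOpenAIClient(
    new Uri(builder.Configuration["OpenAI:Endpoint"]!),
    new AzureKeyCredential(builder.Configuration["OpenAI:ApiKey"]!)));

// Blob Storage client, shared by the blob service
builder.Services.AddSingleton(sp =>
    new BlobServiceClient(builder.Configuration["BlobStorage:ConnectionString"]));

// Application services consumed by FormController
builder.Services.AddScoped<IBlobService, BlobService>();
builder.Services.AddScoped<IDocumentIntelligenceService, DocumentIntelligenceService>();
builder.Services.AddScoped<IOpenApiService, OpenApiService>();

var app = builder.Build();
app.MapControllers();
app.Run();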

2. Azure OpenAI Service Setup

Azure OpenAI provides the conversational AI capabilities. We’ll implement a method GetChatStreamCompletion that streams AI responses back to the client. Instead of waiting for the entire response, the client receives updates as they’re generated, making the experience feel more interactive.

Key Idea:

  • Send a prompt (user’s question) to Azure OpenAI.
  • Receive streaming chunks of text.
  • Write these chunks to the response stream in real-time.

OpenAI Service Code

In this step, you’ll implement the GetChatStreamCompletion method to stream responses to the user in real-time.

public async Task GetChatStreamCompletion(string prompt, Stream outputStream, ILogger logger)
{
    try
    {
        ChatClient chatClient = _azureClient.GetChatClient("gpt-4o");

        // Call the OpenAI API and request a streamed completion
        AsyncCollectionResult<StreamingChatCompletionUpdate> completionUpdates =
            chatClient.CompleteChatStreamingAsync(new[]
            {
                new UserChatMessage(prompt),
            });

        await foreach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)
        {
            foreach (ChatMessageContentPart contentPart in completionUpdate.ContentUpdate)
            {
                byte[] data = System.Text.Encoding.UTF8.GetBytes(contentPart.Text);
                await outputStream.WriteAsync(data, 0, data.Length);
                await outputStream.FlushAsync(); // Ensure the chunk is sent to the client
            }
        }

        logger.LogInformation("Streaming completed successfully.");
    }
    catch (Exception ex)
    {
        logger.LogError(ex, "Error occurred while streaming the chat response.");
        throw;
    }
}

What’s Happening Here?

We call the Azure OpenAI client to get a streaming response for the given prompt. As each piece of the response becomes available, we write it immediately to the HTTP response, so the frontend sees the text appear in real-time.
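
The snippet assumes an `_azureClient` field on the service. A minimal sketch of how it might be constructed, assuming key-based authentication and configuration keys of your choosing (note that "gpt-4o" passed to GetChatClient must match a chat model deployment name in your Azure OpenAI resource):

// Sketch of the _azureClient field used above (config keys are assumptions)
using Azure;
using Azure.AI.OpenAI;

private readonly AzureOpenAIClient _azureClient;

public OpenApiService(IConfiguration configuration)
{
    _azureClient = new AzureOpenAIClient(
        new Uri(configuration["OpenAI:Endpoint"]!),
        new AzureKeyCredential(configuration["OpenAI:ApiKey"]!));
}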

3. Document Intelligence Service

To handle uploaded documents (like PDFs), the `DocumentIntelligenceService` uses Azure Form Recognizer to read and extract text. This lets the AI reference actual document content when generating responses.

Document Intelligence Service Code

public async Task<string> ReadFile(string fileURL)
{
    string endpoint = _configuration.GetValue<string>("DocumentIntelligence:Endpoint");
    string apiKey = _configuration.GetValue<string>("DocumentIntelligence:ApiKey");

    AzureKeyCredential credential = new AzureKeyCredential(apiKey);
    DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);

    Uri fileUri = new Uri(fileURL);
    AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(WaitUntil.Completed, "prebuilt-read", fileUri);
    AnalyzeResult result = operation.Value;

    return result.Content;
}

Key Idea:

  • Upload your document to Blob Storage.
  • Pass its URL to Form Recognizer.
  • Extract and return the text for use in prompts.
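
For reference, the configuration keys read above map to an appsettings.json section like this (endpoint and key values are placeholders):

{
  "DocumentIntelligence": {
    "Endpoint": "https://<your-resource>.cognitiveservices.azure.com/",
    "ApiKey": "<your-form-recognizer-key>"
  }
}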

4. Blob Service for File Uploads

Users can upload files (like PDFs or Word documents), which we store in Azure Blob Storage. We then use these files for text extraction and analysis.

Blob Service Code

public async Task<string> UploadAsync(Stream fileStream, string containerName, string fileName, string contentType)
{
    if (!IsContainerNameValid(containerName))
    {
        _logger.LogError($"Invalid Container Name: {containerName}.");
        throw new HttpStatusException(HttpStatusCode.BadRequest, $"Invalid Container Name: {containerName}");
    }

    BlobContainerClient containerClient = _blobServiceClient.GetBlobContainerClient(containerName);
    await containerClient.CreateIfNotExistsAsync();

    try
    {
        await containerClient.SetAccessPolicyAsync(PublicAccessType.None);

        var blob = containerClient.GetBlobClient(fileName);
        await blob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots);
        await blob.UploadAsync(fileStream, new BlobUploadOptions { HttpHeaders = new BlobHttpHeaders { ContentType = contentType } });

        // Generate a SAS URL so Form Recognizer can read the private blob
        var url = GetSASToken(blob);
        return url.ToString();
    }
    catch (Exception exception)
    {
        _logger.LogError(exception, $"Error while uploading file: {fileName} in container: {containerName}");
        throw new HttpStatusException(HttpStatusCode.BadRequest, exception.Message);
    }
}

Key Idea:

  • Uploads are stored securely in Blob Storage (public access is disabled on the container).
  • A SAS URL is generated so that Form Recognizer can access the file for analysis.
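
The `GetSASToken` helper isn't shown in the article. Here is a minimal sketch of what it might look like, assuming the BlobClient was created from a connection string with an account key (required so the client can sign the token):

// Hypothetical sketch: generate a short-lived, read-only SAS URL for the blob
using Azure.Storage.Sas;

private Uri GetSASToken(BlobClient blob)
{
    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = blob.BlobContainerName,
        BlobName = blob.Name,
        Resource = "b", // "b" = an individual blob
        ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Read);

    // Signs the token with the account key the client was authorized with
    return blob.GenerateSasUri(sasBuilder);
}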

5. Controller Setup

The FormController ties everything together. It handles:

  • File Comparison Endpoint: Users upload multiple files, and the controller extracts text from each. It then sends this combined prompt to Azure OpenAI, which returns a summary or comparison.
  • Streaming Chat Endpoint: For simple text queries (without file uploads), the endpoint streams the AI’s response directly.

FormController.cs

using DocumentFormat.OpenXml.Packaging;
using GenAI.Api.Services;
using Microsoft.AspNetCore.Mvc;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.DocumentAnalysis;
using Azure;

namespace GenAI.Api.Controllers;

[ApiController]
[Route("api/[controller]")]
public class FormController : ControllerBase
{
    private readonly IBlobService _blobService;
    private readonly IDocumentIntelligenceService _documentIntelligenceService;
    private readonly IOpenApiService _openApiService;
    private readonly ILogger _logger;

    public FormController(IBlobService blobService,
        ILogger<FormController> logger,
        IDocumentIntelligenceService documentIntelligenceService,
        IOpenApiService openApiService)
    {
        _blobService = blobService;
        _logger = logger;
        _documentIntelligenceService = documentIntelligenceService;
        _openApiService = openApiService;
    }

    [HttpPost("compare")]
    public async Task CompareFiles([FromForm] List<IFormFile> files, [FromForm] string customPrompt = "Compare the texts and identify the differences.")
    {
        string prompt = string.Empty;
        if (files != null && files.Any())
        {
            // Iterate through the uploaded files and extract their text
            for (int i = 0; i < files.Count; i++)
            {
                var fileText = await ExtractTextFromFile(files[i]);
                prompt += $"\n\nDocument {i + 1}:\n{fileText}";
            }
        }

        // Combine the custom prompt with extracted file texts
        var finalPrompt = $"{customPrompt}{prompt}";

        Response.ContentType = "text/plain";
        Response.Headers.Append("Cache-Control", "no-cache");
        // Don't set Transfer-Encoding manually; Kestrel chunks the response
        // automatically when no Content-Length is set.

        // Send the prompt to OpenAI for streaming completion
        await _openApiService.GetChatStreamCompletion(finalPrompt, Response.Body, _logger);
    }

    [HttpGet("querystream")]
    [Produces("text/plain")]
    public async Task StreamChatResponse([FromQuery] string prompt)
    {
        Response.ContentType = "text/plain";
        Response.Headers.Append("Cache-Control", "no-cache");

        try
        {
            await _openApiService.GetChatStreamCompletion(prompt, Response.Body, _logger);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error occurred while streaming response.");
            Response.StatusCode = StatusCodes.Status500InternalServerError;
            await Response.Body.WriteAsync(System.Text.Encoding.UTF8.GetBytes("Error occurred while streaming."));
        }
    }

    private async Task<string> ExtractTextFromFile(IFormFile file)
    {
        using var memoryStream = new MemoryStream();
        await file.CopyToAsync(memoryStream);
        memoryStream.Position = 0; // Rewind before reading

        if (file.FileName.EndsWith(".pdf"))
        {
            // AppendTimeStamp() is a custom extension method that makes the blob name unique
            string fileURL = await _blobService.UploadAsync(file.OpenReadStream(), "documents", file.FileName.AppendTimeStamp(), file.ContentType);
            return await _documentIntelligenceService.ReadFile(fileURL);
        }
        else if (file.FileName.EndsWith(".docx"))
        {
            using var wordDoc = WordprocessingDocument.Open(memoryStream, false);
            return wordDoc.MainDocumentPart.Document.Body.InnerText;
        }

        return string.Empty;
    }
}

How It Works:

  • The user uploads one or more files.
  • The controller extracts their text (using Blob Storage + Form Recognizer for PDFs, or OpenXML for Word documents).
  • It combines the extracted text with a user-specified prompt.
  • It calls Azure OpenAI to compare the files or generate insights.
  • It streams the response back to the user.
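
To sanity-check the endpoints before wiring up the frontend, you can call them directly. The host and port below are assumptions; `-N` disables curl's buffering so chunks print as they arrive:

# Stream a plain chat response
curl -N "https://localhost:5001/api/Form/querystream?prompt=Explain%20SAS%20tokens"

# Compare two uploaded documents
curl -N -X POST "https://localhost:5001/api/Form/compare" \
  -F "files=@contract_v1.pdf" \
  -F "files=@contract_v2.pdf" \
  -F "customPrompt=Compare the texts and identify the differences."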

Frontend Overview: Angular + Angular Material

The frontend is built with Angular and leverages Angular Material for a polished UI and ngx-markdown to display markdown-formatted AI responses. The frontend:

  • Provides a user-friendly interface with a text area, a file picker, and a styled chat window.
  • Allows users to type messages and upload files.
  • Sends these inputs to the backend.
  • Displays the AI’s responses as they arrive.

1. Setting Up the Angular Project

Create a New Angular Project: Use the Angular CLI to set up the project:

 ng new simplechat

Install Dependencies: Add Angular Material for UI components and `ngx-markdown` for rendering markdown responses.

ng add @angular/material
npm install ngx-markdown
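
Because the app uses standalone components, the article's application config isn't shown; a minimal sketch of the providers it likely needs (file name and exact setup are assumptions; on older ngx-markdown versions you would import `MarkdownModule.forRoot()` instead of `provideMarkdown()`):

// app.config.ts — a sketch of the required providers (assumed, not from the article)
import { ApplicationConfig } from '@angular/core';
import { provideHttpClient } from '@angular/common/http';
import { provideAnimations } from '@angular/platform-browser/animations';
import { provideMarkdown } from 'ngx-markdown';

export const appConfig: ApplicationConfig = {
  providers: [
    provideHttpClient(),   // required by ChatService
    provideAnimations(),   // required by the slideIn animation and Material
    provideMarkdown(),     // required by the <markdown> component
  ],
};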

2. Chat Service

The ChatService communicates with the .NET API. It sends user prompts and files to the /compare endpoint.

chat.service.ts

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { environment } from '../environments/environment';

@Injectable({
  providedIn: 'root'
})
export class ChatService {
  private apiUrl = `${environment.apiUrl}/Form`;

  constructor(private http: HttpClient) { }

  sendMessage(message: string, files?: File[]): Observable<string> {
    const formData = new FormData();
    formData.append('customPrompt', message);

    if (files && files.length > 0) {
      // Use the same field name ("files") for every file so ASP.NET Core
      // can bind them all to the List<IFormFile> files parameter
      files.forEach((file) => {
        formData.append('files', file, file.name);
      });
    }

    // The API returns text/plain, so ask for a text response rather than JSON
    return this.http.post(`${this.apiUrl}/compare`, formData, { responseType: 'text' });
  }
}
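
Note that `environment.apiUrl` must point at the API host, e.g. `apiUrl: 'https://localhost:5001/api'` (the exact URL depends on your setup). Also be aware that `HttpClient.post` resolves once with the complete body, so this service delivers the streamed text in a single chunk. If you want chunks to render as they arrive, one option is to read the response body with the Fetch API; a minimal sketch (the helper name and shape are mine, not from the article):

// Hypothetical helper: yields each chunk of the chunked text/plain response
// as soon as it arrives, instead of waiting for the full body.
export async function* streamCompare(apiUrl: string, formData: FormData): AsyncGenerator<string> {
  const response = await fetch(`${apiUrl}/compare`, { method: 'POST', body: formData });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    yield decoder.decode(value, { stream: true }); // safely decodes partial UTF-8
  }
}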

3. Input Component (FormFieldComponent)

This component manages the user’s input area and file uploads.

  • A text area for the message.
  • A button to upload files.
  • A send button to submit the prompt and files to the backend.

form-field.component.html

<div class="file-list">

  <div *ngFor="let file of selectedFiles" class="file-item">

    <img *ngIf="file.type.startsWith('image/')" [src]="file.name" alt="{{ file.name }}" class="thumbnail" />

    <mat-icon *ngIf="!file.type.startsWith('image/')">insert_drive_file</mat-icon>

    <span>{{ file.name }}</span>

  </div>

</div>

<form class="input-container">

  <mat-form-field appearance="fill">

    <textarea matInput type="text" [formControl]="control" required></textarea>

  </mat-form-field>

  <input type="file" #fileInput (change)="handleFileInput($event)" multiple style="display: none" />

  <button mat-icon-button (click)="handleSend()"><mat-icon>send</mat-icon></button>

  <button mat-icon-button (click)="fileInput.click()">

    <mat-icon>attach_file</mat-icon>

  </button>

</form>

form-field.component.ts

import { CommonModule } from "@angular/common";
import { Component, EventEmitter, Input, Output } from '@angular/core';
import { FormControl, FormsModule, ReactiveFormsModule } from "@angular/forms";
import { MatButtonModule } from "@angular/material/button";
import { MatFormFieldModule } from "@angular/material/form-field";
import { MatIconModule } from "@angular/material/icon";
import { MatInputModule } from "@angular/material/input";

@Component({
  selector: 'app-form-field',
  standalone: true,
  imports: [CommonModule, FormsModule, ReactiveFormsModule, MatFormFieldModule, MatInputModule, MatButtonModule, MatIconModule],
  templateUrl: './form-field.component.html',
  styleUrls: ['./form-field.component.css']
})
export class FormFieldComponent {
  @Input() control!: FormControl;
  @Output() nextStepEvent = new EventEmitter<{ message: string; files: File[] }>();
  selectedFiles: File[] = [];

  handleSend() {
    const message = this.control.value;
    if (message || this.selectedFiles.length > 0) {
      this.nextStepEvent.emit({ message, files: this.selectedFiles });
      this.control.reset();
      this.selectedFiles = [];
    }
  }

  handleFileInput(event: Event) {
    const input = event.target as HTMLInputElement;
    if (input.files && input.files.length > 0) {
      this.selectedFiles = Array.from(input.files);
    }
  }
}

4. Chat Component

The ChatComponent handles:

  • Displaying the conversation history.
  • Showing loading indicators while waiting for AI responses.
  • Rendering AI responses as they stream in.
  • Auto-scrolling so the user always sees the latest message.

The chat component integrates with FormFieldComponent and uses ngx-markdown to nicely format AI responses.

chat.component.html

<div class="container">

  <div class="message-container" #scrollMe>

    <div

      class="message"

      *ngFor="let message of messages"

      [class.user-message]="message.user"

      [class.system-message]="!message.user"

      [@slideIn]="!message.user ? 'in' : null"

    >

      <div class="bubble" *ngIf="!message.user">

        <markdown

          clipboard

          [data]="message.content"

          lineNumbers

          [start]="5"

        ></markdown>

      </div>

      <div class="bubble" *ngIf="message.user" [innerHTML]="message.content">

      </div>

    </div>

  </div>

  <div *ngIf="isLoading" class="loader-container">

    <mat-progress-spinner mode="indeterminate" diameter="50"></mat-progress-spinner>

  </div>

  <div class="input-container">

    <app-form-field

      [control]="form.controls.message"

      (nextStepEvent)="handleNewInfo($event)"

    ></app-form-field>

  </div>

</div>

chat.component.ts

import { Component, ElementRef, ViewChild } from '@angular/core';
import { CommonModule } from '@angular/common';
import {
  FormBuilder,
  FormControl,
  FormsModule,
  ReactiveFormsModule,
  Validators,
} from '@angular/forms';
import { ChatService } from '../chat.service';
import { MatFormFieldModule } from '@angular/material/form-field';
import { MatInputModule } from '@angular/material/input';
import { MatButtonModule } from '@angular/material/button';
import { FormFieldComponent } from '../form-field/form-field.component';
import { MarkdownComponent } from 'ngx-markdown';
import { MatProgressSpinnerModule } from '@angular/material/progress-spinner';
import { trigger, style, transition, animate } from '@angular/animations';

interface Message {
  content: string;
  user: boolean;
}

@Component({
  selector: 'app-chat',
  standalone: true,
  imports: [
    CommonModule,
    MarkdownComponent,
    FormsModule,
    ReactiveFormsModule,
    MatFormFieldModule,
    MatInputModule,
    MatButtonModule,
    FormFieldComponent,
    MatProgressSpinnerModule,
  ],
  templateUrl: './chat.component.html',
  styleUrls: ['./chat.component.css'],
  animations: [
    trigger('slideIn', [
      transition(':enter', [
        style({ transform: 'translateX(-100%)', opacity: 0 }),
        animate('500ms ease-out', style({ transform: 'translateX(0)', opacity: 1 })),
      ]),
    ]),
  ],
})
export class ChatComponent {
  isLoading: boolean = false;
  @ViewChild('scrollMe', { static: false }) scrollFrame: ElementRef | undefined;
  private scrollContainer: any;
  messages: Message[] = [];
  form: any;

  constructor(private fb: FormBuilder, private chatService: ChatService) {
    this.form = this.fb.group({
      message: new FormControl('', Validators.required),
    });
  }

  ngAfterViewInit(): void {
    this.scrollContainer = this.scrollFrame!.nativeElement;
  }

  addMessage(content: string, user: boolean) {
    // Append streamed chunks to the last system message instead of opening a new bubble
    if (!user && this.messages.length > 0 && !this.messages[this.messages.length - 1].user) {
      this.messages[this.messages.length - 1].content += content;
    } else {
      this.messages.push({ content, user });
    }
    this.scrollToBottom();
  }

  handleNewInfo(event: { message: string; files?: File[] }) {
    this.isLoading = true;
    const { message, files } = event;

    let finalMessage = '';
    if (files && files.length > 0) {
      files.forEach((file) => {
        finalMessage += file.name + '<br>';
      });
      finalMessage += '<hr>';
    }
    if (message) {
      finalMessage += message;
      this.addMessage(finalMessage, true);
    }

    // Use the emitted message rather than the form value, which the
    // input component resets after emitting
    this.chatService.sendMessage(message, files).subscribe({
      next: (chunk: string) => {
        this.addMessage(chunk, false);
        this.isLoading = false;
      },
      error: (err: any) => {
        console.error(err);
        this.isLoading = false;
      },
      complete: () => {
        console.log('Streaming complete');
      },
    });
  }

  scrollToBottom(): void {
    if (this.scrollContainer) {
      this.scrollContainer.scroll({
        top: this.scrollContainer.scrollHeight,
        left: 0,
        behavior: 'smooth',
      });
    }
  }
}

5. App Component

The `AppComponent` serves as the root component that hosts the `ChatComponent`.

app.component.html

<div class="container">

  <app-chat></app-chat>

</div>

app.component.ts

import { Component } from '@angular/core';
import { ChatComponent } from "./chat/chat.component";

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [ChatComponent],
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'simplechat';
}

Bringing It All Together

Flow of the Application:

  1. User enters a prompt or uploads files in the Angular UI.
  2. The ChatComponent sends the prompt and files to the backend via the ChatService.
  3. The backend:
    • Uploads files to Blob Storage (if any).
    • Extracts text using Form Recognizer or OpenXML.
    • Combines extracted text with the prompt.
    • Calls Azure OpenAI for a streamed response.
  4. The Angular frontend receives chunks of the AI response in real-time and displays them in the chat window.

Conclusion and Next Steps

In this article, we built a foundation for an AI-driven chat application:

  • Backend (.NET): Integrated with Azure OpenAI, Blob Storage, and Form Recognizer to generate, analyze, and return AI responses.
  • Frontend (Angular): Provided a user-friendly interface to send prompts, upload documents, and display real-time responses.

This base can be extended and customized. For example:

  • Add user authentication.
  • Enhance document analysis with more advanced models.
  • Implement persistent chat history using Azure Cosmos DB, and maintain conversation context across calls to Azure OpenAI.

In the next article, we’ll explore maintaining chat history while using Azure OpenAI APIs. Until then, consider this guide a starting point for integrating AI capabilities into your own applications.

Complete Code on GitHub:

https://github.com/nitin27may/AzureOpenAi-Chatub

Get Involved!

  • Try the Code: Test the examples provided and share your results in the comments.
  • Follow Us: Stay updated by following us on GitHub.
  • Subscribe: Sign up for our newsletter to receive expert Azure development tips.
  • Join the Conversation: What challenges have you faced with Azure OpenAI? Share your experiences in the comments below!
