Merge pull request #8 from rturknett/readMe-updates

Read me updates

rturknett authored Oct 13, 2023
2 parents 1d3a2f4 + e42b4e2 commit 188ec7a
Showing 47 changed files with 468 additions and 1,209 deletions.
66 changes: 66 additions & 0 deletions Extensibility.md
@@ -0,0 +1,66 @@
**Future Extensibility** 

The current solution focuses mainly on knowledge mining from post-call
audio and text conversations. A number of additional capabilities and
considerations would complement this accelerator and are candidate
areas for extension:

1. **Improved system of aggregate insight prompts**
> The accelerator uses system prompts to derive insights from a
> conversation. These prompts are customizable within the accelerator
> configuration, but new use cases may require additional insights,
> which means changing the existing prompts or creating new ones. In
> addition, new prompting techniques or structures may improve accuracy
> and relevance, especially as new versions of GPT are released.
2. **Q&A style interface for users to ask questions across all indexed conversations**
> In the current accelerator, a keyword search via Azure Cognitive
> Search is the only way for users to filter and find conversations
> relevant to their inquiry. A Q&A approach would enable conversational
> queries and follow-up questions. It could also be extended to let
> users derive their own insights, without pre-processing across the
> full dataset.
3. **Power BI dashboard to interact with index tags and insights**
> As insights are derived and aggregated, a common approach is to pull
> these data points into a larger dashboard that integrates with
> additional systems and data. This could include call-center-specific
> dashboards that combine the AI-derived insights with standard call
> center KPIs to show the full picture. Given that this solution is an
> example of how to mine these data points, such a dashboard would be a
> common integration in a real-world implementation.
4. **Historical reprocessing workflow for all conversations to aggregate new insights as they're authored**
> It's common for additional insights or prompt enhancements to be made
> over time, but as these changes are made the existing conversations
> need to be reprocessed. Currently, the solution has no "reprocess"
> capability; conversations must be removed and then re-added. For ease
> of use, a reprocessing capability could be added to rerun the skills
> on the index, either on a regular cadence or manually across the full
> index. This keeps the mined knowledge base up to date with the latest
> insights and capabilities.
5. **Pre-processing mechanism for simple TXT conversions to the required JSON format**
> Text conversations currently must be supplied in a single JSON
> format, but a pre-processing tool could be created to convert a
> simple text file into the correct JSON format; see the sketch after
> this list.
6. **Video and other conversation formats**
> The solution only accepts text- and audio-based conversations, but it
> could be extended to support additional formats as well, including
> video and image sets. This would cover more of the channels through
> which customers and users interact, while still mining the same
> knowledge insights into a single system.
7. **Opt-out flag on conversations to skip knowledge mining processing**
> Privacy and legal requirements around data processing are
> considerations for any data processing platform. This solution could
> be extended to support an explicit parameter or flag indicating
> whether users have opted in or out of having their conversation
> processed by ML and AI, and to skip conversations where users have
> opted out.
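
For item 5, a minimal pre-processing sketch in Python is shown below. The target schema used here (`ConversationId`, `Phrases`, `SpeakerId`, `Text`) is a hypothetical example for illustration only; the actual JSON format required by the accelerator may differ.

```python
import json
import sys
import uuid
from pathlib import Path


def txt_to_json(txt_path: str) -> dict:
    """Convert a plain-text transcript ("Speaker: utterance" per line)
    into a simple conversation JSON document.

    NOTE: the field names below are illustrative placeholders, not the
    accelerator's actual schema.
    """
    phrases = []
    for line in Path(txt_path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        # Split "Speaker: utterance"; fall back to an unknown speaker.
        speaker, _, text = line.partition(":")
        if not text:
            speaker, text = "Unknown", line
        phrases.append({"SpeakerId": speaker.strip(), "Text": text.strip()})
    return {"ConversationId": str(uuid.uuid4()), "Phrases": phrases}


if __name__ == "__main__":
    # Usage: python txt_to_json.py transcript.txt > conversation.json
    print(json.dumps(txt_to_json(sys.argv[1]), indent=2))
```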
4 changes: 1 addition & 3 deletions GBB.ConversationalKM.Python/README.md
@@ -1,6 +1,4 @@
# GBB Conversational Knowledge Mining Architecture

![alt text](../images/conversationalkm_architecture.PNG "Conversational Knowledge Mining Architecture")
# Conversational Knowledge Mining Architecture

PREPROCESSING:

3 changes: 1 addition & 2 deletions GBB.ConversationalKM.Python/TelemetryDataExtractor/readme.md
@@ -7,5 +7,4 @@ The `BlobTrigger` makes it incredibly easy to react to new Blobs inside of Azure
For a `BlobTrigger` to work, you provide a path which dictates where the blobs are located inside your container, and can also help restrict the types of blobs you wish to return. For instance, you can set the path to `samples/{name}.png` to restrict the trigger to only the samples path and only blobs with ".png" at the end of their name.
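
For illustration, here is a minimal blob trigger sketch using the Python v2 programming model (the function in this repository may use the older function.json-based model); the path, connection setting, and function name below are examples, not the accelerator's actual values.

```python
import logging

import azure.functions as func

app = func.FunctionApp()


# Fires for every new blob under "samples/" whose name ends in ".png";
# {name} is bound to the blob name without the path prefix or extension.
@app.blob_trigger(arg_name="blob",
                  path="samples/{name}.png",
                  connection="AzureWebJobsStorage")
def on_new_blob(blob: func.InputStream):
    logging.info("Processing blob %s (%d bytes)", blob.name, blob.length)
```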

## Learn more

<TODO> Documentation
[Click to learn more about BlobTriggers for Azure Functions](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4&pivots=programming-language-python)
2 changes: 1 addition & 1 deletion GBB.ConversationalKM.Python/build-search-infra/readme.md
@@ -8,4 +8,4 @@ For a `TimerTrigger` to work, you provide a schedule in the form of a [cron expr
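
A minimal timer trigger sketch using the Python v2 programming model is shown below (the function in this repository may use the older function.json-based model); the schedule is an example, not the accelerator's actual cadence.

```python
import logging

import azure.functions as func

app = func.FunctionApp()


# NCRONTAB format: {second} {minute} {hour} {day} {month} {day-of-week}.
# "0 */5 * * * *" fires every five minutes.
@app.timer_trigger(arg_name="timer", schedule="0 */5 * * * *")
def rebuild_search_infra(timer: func.TimerRequest):
    if timer.past_due:
        logging.warning("The timer is past due")
    logging.info("Timer trigger fired")
```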

## Learn more

<TODO> Documentation
[Click to learn more about TimerTriggers for Azure Functions](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer?tabs=python-v2%2Cisolated-process%2Cnodejs-v4&pivots=programming-language-python)
31 changes: 22 additions & 9 deletions GBB.ConversationalKM.WebUI/README.md
Expand Up @@ -5,7 +5,7 @@ The Cognitive Search Template contains a .NET Core MVC Web app used as a Templat

In just a few steps, you can configure this template UI to query your search index. This template will render a web page similar to the following:

![web user interface](../images/WebUI.jpg)
![web user interface](/images/readMe/image2.png)

## Prerequisites

@@ -38,9 +38,10 @@ This file contains a mix of required and optional fields described below.
4. **SearchIndexerName** - The name of your Azure Cognitive Search indexer
5. **StorageAccountName** - The name of your Azure Blob Storage Account
6. **StorageAccountKey** - The key for your Azure Blob Storage Account
7. **StorageContainerAddress** - The URL to the storage container where your documents are stored. This should be in the following format: *https://{storage-account-name}.blob.core.windows.net/{container-name}*
8. **KeyField** - The key field for your search index. This should be set to the field specified as the key document Id in the index. By default this is *metadata_storage_path*.
9. **IsPathBase64Encoded** - By default, metadata_storage_path is the key, and it gets base64 encoded, so this is set to true by default. If your key is not encoded, set this to false.
7. **StorageContainerAddress** - The URL to the storage container where your audio files are stored. This should be in the following format: *https://{storage-account-name}.blob.core.windows.net/{container-name}*
8. **StorageContainerAddress2** - The URL to the storage container where your documents are stored. This should be in the following format: *https://{storage-account-name}.blob.core.windows.net/{container-name}*
9. **KeyField** - The key field for your search index. This should be set to the field specified as the key document Id in the index. By default this is *metadata_storage_path*.
10. **IsPathBase64Encoded** - By default, metadata_storage_path is the key, and it gets base64 encoded, so this is set to true by default. If your key is not encoded, set this to false.
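
Put together, the required portion of *appsettings.json* might look like the sketch below. This assumes the JSON keys match the field names above; fields 1-3 are not shown in this diff, and all values are placeholders.

```json
{
  // Required fields 1-3 (not shown in this diff) go here as well.
  "SearchIndexerName": "{indexer-name}",
  "StorageAccountName": "{storage-account-name}",
  "StorageAccountKey": "{storage-account-key}",
  "StorageContainerAddress": "https://{storage-account-name}.blob.core.windows.net/{audio-container-name}",
  "StorageContainerAddress2": "https://{storage-account-name}.blob.core.windows.net/{documents-container-name}",
  "KeyField": "metadata_storage_path",
  "IsPathBase64Encoded": true
}
```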

### Optional Fields

@@ -51,7 +51,6 @@ While some fields are optional, we recommend not removing them from *appsettings
"InstrumentationKey": "",

// Optional container addresses if using more than one indexer:
"StorageContainerAddress2": "https://{storage-account-name}.blob.core.windows.net/{container-name}",
"StorageContainerAddress3": "https://{storage-account-name}.blob.core.windows.net/{container-name}",

// Optional key to an Azure Maps account if you would like to display the geoLocation field in a map
@@ -72,7 +72,6 @@ While some fields are optional, we recommend not removing them from *appsettings
1. **InstrumentationKey** - Optional instrumentation key for Application Insights. The instrumentation key connects the web app to Application Insights in order to populate the Power BI reports.
2. **StorageContainerAddress2** & **StorageContainerAddress3** - Optional container addresses if using more than one indexer
3. **AzureMapsSubscriptionKey** - You have the option to provide an Azure Maps account if you would like to display a geographic point in a map in the document details. The code expects a field called *geolocation* of type Edm.GeographyPoint. If you wish to change this behavior (for instance, if you would like to use a different field), you can modify details.js.
![geolocation](../images/geolocation.png)
4. **GraphFacet** - The GraphFacet is used for generating the relationship graph. This can now be edited in the UI.
5. **Customizable** - Determines if user is allowed to *customize* the web app. Customizations include uploading documents and changing the colors/logo of the web app. **OrganizationName**, **OrganizationLogo**, and **OrganizationWebSiteUrl** are additional fields that also allow you to do light customization.

Expand All @@ -82,8 +81,6 @@ At this point, your web app is configured and is ready to run. By default, all f

If you would like to further customize the UI, you can update the following fields in *Search\SearchModel.cs*. You can select the filters that you are able to facet on, the tags shown with the results, as well as the fields returned by the search.

![searchmodel](../images/SearchModel.png)

**Facets** - Defines which facetable fields will show up as selectable filters in the UI. By default all facetable fields are included.

**Tags** - Defines which fields will be added to the results card and details view as buttons. By default all facetable fields are included.
@@ -96,7 +93,23 @@ This template serves as a great baseline for a Cognitive Search solution, howeve

We have a special behavior if you have a field called *translated_text*: the UI will automatically show both the original text and the translated text. This can be handy. If you would like to change this behavior (disable it, or change the name of the field), you can do so in details.js (GetTranscriptHTML method).

![geolocation](../images/translated.png)
## 4. How do I run this locally?
1. To run the UI locally, you must install .NET Core 3.1 (note: this is not the latest version and must be installed explicitly)

2. Open the GBB.ConversationalKM.sln file in Visual Studio

3. Click the play button with IIS Express ![image](/images/Troubleshooting/image.png)

4. On first launch on localhost, type `thisissafe` anywhere on the page
![image](/images/Troubleshooting/image2.png)

## 5. How do I deploy my local changes?
1. To automatically deploy changes when pushing to GitHub, you must enable GitHub Actions
2. Create a container registry in Azure
3. Store your container credentials in GitHub secrets
4. You should now be able to use the docker-image-web-ui.yml workflow to deploy recent master branch changes to your container

### Key Files
