Commit 11785997 authored by  Joel  Oksanen's avatar Joel Oksanen

System architecture and ADA app write up done

parent d72dd740
// ProductView.swift
// ADAbot
// Created by Joel Oksanen on 12.6.2020.
// Copyright © 2020 Joel Oksanen. All rights reserved.

import SwiftUI

struct ProductView: View {
    @ObservedObject var connectionManager: ConnectionManager

    var body: some View {
        VStack(spacing: 0) {
            // The original body referenced ProductView itself here, which
            // would recurse infinitely; a product info subview (named
            // ProductInfoView here as a hypothetical placeholder) is
            // presumably intended.
            ProductInfoView(connectionManager: connectionManager)
            ChatView(connectionManager: connectionManager)
        }
    }
}
// SearchView.swift
// ADAbot
// Created by Joel Oksanen on 12.6.2020.
// Copyright © 2020 Joel Oksanen. All rights reserved.
import Foundation
@@ -47,7 +47,7 @@ For both 1.\ and 2.\ we use BERT, a language model proposed by Devlin et al.\ \c
for tree = {l=2cm}
for tree = {draw, l=2.5cm}
@@ -55,15 +55,16 @@ For both 1.\ and 2.\ we use BERT, a language model proposed by Devlin et al.\ \c
[\dots, draw=none, minimum width=2cm]
@@ -16,6 +16,7 @@
\chapter{ADA System}
In this chapter, we introduce the system architecture of our ADA implementation, as well as an interactive front-end application based on the \textit{Botplication} design principles proposed by Klopfenstein et al.\ \cite{RefWorks:doc:5e395ff6e4b02ac586d9a2c8}. We conclude by evaluating the performance and usability of our system.
The full ADA system architecture diagram is shown in Figure \ref{fig:ada_architecture}. We will refer to this diagram in the following sections.
\caption{The full ADA system architecture}
The back-end of the system consists of three distinct processes:
\item Ontology extraction,
\item QBAF extraction, and
\item a \textit{conversational agent} that handles interaction with the front-end application.
Data flows through the diagram from left to right: the ontology for cameras is used to extract a QBAF for the Canon EOS 200D camera model, and the QBAF for the model is used by the conversational agent to communicate information about the model to the user via the front-end application.
While the conversational agent interacts with the user in real-time, the ontology and QBAF extraction processes run autonomously, and interact with the rest of the system only through their respective databases where they store the extracted data. This is the case for two reasons:
\item The ontology and QBAF extraction processes mine information from a large number of reviews, which takes a substantial amount of time. Performing these processes in real-time would therefore lead to unacceptable delays for the user, who expects fluid interaction with the system.
\item The extracted ontology and QBAF data is often used by multiple processes: the same ontology for cameras can be used to extract a QBAF for any camera model, and the same QBAF for a particular model can be used in conversations about the model with a number of different users. Extracting an ontology or QBAF from scratch each time would therefore waste a lot of computing power.
\subsubsection{Ontology extraction process}
The ontology extraction process uses Amazon user reviews to extract ontologies for product categories with the method detailed in Chapter \ref{chap:ontology}. As each ontology requires mining thousands of review texts (around 30,000 reviews was deemed sufficient in Section \ref{sec:ontology_eval}), extracting ontologies for each of the thousands of product categories on Amazon requires a lot of computing power. However, once an accurate ontology for a product category such as cameras has been extracted, it does not need to be updated for some time. Although the composition or meaning of products can change over time, for most product categories any changes usually happen slowly over the course of several years. Therefore, we propose that an ontology is initially extracted once for each product category, after which a background process can update the ontologies for categories with many new products when needed.
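The refresh policy above can be sketched as follows. This is an illustrative sketch, not the actual ADA code: the names \texttt{CategoryStore}, \texttt{NEW\_PRODUCT\_THRESHOLD}, and \texttt{extract\_ontology} are all assumptions, and the trigger threshold is a placeholder value.

```python
NEW_PRODUCT_THRESHOLD = 500  # assumed number of new products that triggers a refresh

class CategoryStore:
    """Minimal in-memory stand-in for the ontology database."""
    def __init__(self):
        self.ontologies = {}    # category -> extracted ontology
        self.new_products = {}  # category -> products added since last refresh

    def needs_refresh(self, category):
        # Extract once initially, then only when many new products appear.
        return (category not in self.ontologies
                or self.new_products.get(category, 0) >= NEW_PRODUCT_THRESHOLD)

def refresh_ontologies(store, extract_ontology):
    """Run the costly extraction only for categories that need it."""
    for category in list(store.new_products):
        if store.needs_refresh(category):
            store.ontologies[category] = extract_ontology(category)  # mines ~30,000 reviews
            store.new_products[category] = 0
```

Under this policy the expensive mining step runs once per category up front, and afterwards only for categories whose product range has changed substantially.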
\subsubsection{QBAF extraction process}
The QBAF extraction process uses Amazon user reviews, the extracted ontologies, and the sentiment analysis method detailed in Chapter \ref{chap:sa} to extract QBAFs for product models with the method detailed in Section \ref{sec:ADA_bg}. As with ontology extraction, the initial extraction of QBAFs for all Amazon products is a costly process. However, unlike the ontology extraction, the QBAF for a product must be updated with each new review, in order for the explanations to accurately reflect the entire set of reviews. Therefore, the QBAF extraction requires a continuous background process. Note, however, that this background process is not as expensive as the initialisation of the QBAFs, as the computationally heavy tasks of feature detection and sentiment analysis only have to be performed on the new review.
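The incremental update can be sketched as below, assuming (hypothetically) that the stored QBAF keeps per-feature vote counts from which the argument strengths are recomputed. The function and field names are illustrative, not from the implementation; \texttt{detect\_features} and \texttt{analyse\_sentiment} stand in for the methods of Chapters \ref{chap:ontology} and \ref{chap:sa}.

```python
def update_qbaf(qbaf, review, detect_features, analyse_sentiment):
    """Fold a single new review into an existing QBAF's vote counts.

    The heavy steps (feature detection and sentiment analysis) are run
    only on the new review; existing counts are simply incremented.
    """
    for feature in detect_features(review):
        sentiment = analyse_sentiment(review, feature)  # e.g. in [-1, 1]
        votes = qbaf.setdefault(feature, {"pos": 0, "neg": 0})
        votes["pos" if sentiment > 0 else "neg"] += 1
    return qbaf
```

This keeps the per-review cost constant, rather than proportional to the total number of reviews for the product.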
\subsubsection{Conversational agent}
The conversational agent is responsible for the dialogue between the user and system. The user can initiate a conversation by requesting information about a particular product on the front-end application. The conversational agent then loads the QBAF for the product from the QBAF database, which it uses to direct the conversation. For each query from the user, the agent returns both a response for the query, as well as options for follow-up questions about the entities mentioned in its response. By only allowing the user to select from a pre-defined set of query options, the agent guides the conversation so that it stays in the familiar domain of the argumentation dialogue detailed in Section \ref{sec:dialogue}.
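The pairing of responses with pre-defined follow-up options can be sketched as follows. This is a hypothetical illustration: the template wording and all names (\texttt{QUERY\_TEMPLATES}, \texttt{build\_reply}) are assumptions, standing in for the query options of the argumentation dialogue of Section \ref{sec:dialogue}.

```python
# Placeholder follow-up questions per argument mentioned in a response.
QUERY_TEMPLATES = [
    "Why was the {arg} rated highly?",
    "What did reviewers say about the {arg}?",
]

def build_reply(text, arguments):
    """Pair a response with pre-defined follow-up questions per argument."""
    return {
        "text": text,
        "options": {arg: [t.format(arg=arg) for t in QUERY_TEMPLATES]
                    for arg in arguments},
    }
```

Because the user can only select from these generated options, every query the agent receives is guaranteed to fall within the dialogue it knows how to answer.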
Multiple users can use the system at the same time, so the agent processes each request on its own thread in order to minimise the response time. The agent keeps track of the conversations by assigning each user a unique identifier.
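The per-user, per-thread request handling described above can be sketched as below. This is a minimal sketch under assumed names (\texttt{ConversationalAgent}, \texttt{handle}), not the actual implementation; the reply itself is a placeholder.

```python
import threading
import uuid

class ConversationalAgent:
    def __init__(self):
        self.conversations = {}       # user_id -> list of past queries
        self.lock = threading.Lock()  # protects the shared conversation dict

    def new_user(self):
        """Assign each user a unique identifier for tracking their conversation."""
        user_id = str(uuid.uuid4())
        with self.lock:
            self.conversations[user_id] = []
        return user_id

    def handle(self, user_id, query, on_response):
        """Process each request on its own thread to minimise response time."""
        def work():
            with self.lock:
                self.conversations[user_id].append(query)
            on_response(f"response to: {query}")  # placeholder reply
        thread = threading.Thread(target=work)
        thread.start()
        return thread
```

Keeping per-user state keyed by identifier lets many such threads serve different users concurrently without their conversations interfering.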
\subsection{iOS Botplication}
Based on the evaluation of conversational methods in Section \ref{sec:conv_eval},
we chose to implement a Botplication front-end for ADA, which interacts with the back-end via a network connection to the conversational agent. Figure \ref{fig:mixer_screenshots} shows three screenshots of the application.
The first screenshot \ref{fig:mixer1} shows a simple product selection screen, which the user can use to browse products. As our resources are limited, we cannot mine ontologies and QBAFs for the whole set of Amazon products, so the product selection screen displays only a small selection of products; a fully developed system would include a product search functionality.
Once the user has selected a product they are interested in, ADA initiates the conversation by asking the user what they would like to know about the product. The user can tap on any of ADA's messages to reveal a set of possible questions determined by the argumentation dialogue. The subjects of these questions are the arguments mentioned in the message, which are highlighted in bold. For example, in \ref{fig:mixer2}, the user can ask about either the mixer, the motor, or the bowl, with two query options presented for each.
An example of a short conversation between the user and ADA is shown in \ref{fig:mixer3}. The conversation starts from a general view of the reviewers' sentiment towards the product, and from there delves deeper into more specific aspects of the product by utilising its ontology. Through this conversation, the user not only gains a better understanding of why the product was highly rated, but possibly also discovers more about the importance of various aspects of the product, which supports the \textit{user revealment} property introduced in Section \ref{sec:conv_eval}. To explore various aspects of the product, the user can at any point return to a previous point in the conversation by tapping on a previous message, which is one key advantage of the message-based Botplication design.
\caption{Product selection screen}
\caption{User controls}
\caption{Example of a conversation}
\caption{Screenshots of the ADA botplication}