Conversational systems have seen a significant rise in demand, driven by commercial applications such as Amazon’s Alexa, Apple’s Siri, and Google Assistant. Multimodal chatbots are a growing area of research, in which users and the conversational agent communicate through natural language and visual data.
The Farfetch Chat R&D project proposes to deliver a new generation of task-oriented conversational agents that interact with and support users seamlessly using verbal and visual information. This innovative project will allow interaction with consumers through a multimodal interface, providing an experience close to the one offered in a physical store.
Keywords: Conversational Agents · Machine Learning · E-Commerce
Farfetch Chat R&D aims to mimic a fashion concierge that understands customers’ needs and provides the correct answers, leveraging vast textual and visual data and the knowledge encoded in training data, product descriptions, and accumulated sequences of past interactions with a massive number of users. Soon, most of the interaction between large organizations and their users will be mediated by AI agents. This view is becoming undisputed as online shopping dominates entire market segments and the new “digitally-native” generations become consumers.
Farfetch Chat R&D proposes to research and deliver a new generation of task-oriented conversational agents that interact with users seamlessly using verbal and visual information. IFetch must provide targeted advice and a “physical store-like” experience through the conversation while maintaining user engagement.
The Farfetch Chat R&D project aims to transform the online shopping experience by changing how customers access information and make shopping decisions.
Promoter:
FARFETCH – Ricardo G. Sousa
Academic Co-promoters:
FCT NOVA – João Magalhães
Instituto Superior Técnico – João Paulo Costeira
CMU:
Language Technologies Institute – Alexander Hauptmann
The project’s ambition is to develop research on technology that will shape the future. To fulfil this ambition, the project will address two critical research challenges: