Exploring the Power of Transformer Models in Hospitality Domain

Authors

  • Jyoti Parsola

DOI:

https://doi.org/10.17762/msea.v70i1.2314

Abstract

Despite decades of medical advancements and rising interest in precision healthcare, the great majority of diagnoses are still made only after patients begin to exhibit observable symptoms of illness. Early disease indication and detection, however, can give patients and caregivers the opportunity for early intervention, better disease management, and more effective use of healthcare resources. Deep learning and other recent advances in machine learning offer a significant opportunity to address this unmet need. Transformer architectures are highly expressive because they encode long-range dependencies in input sequences via self-attention mechanisms. The models presented in this work are Transformer-based (TB), and we describe each one in detail, contrasting it with the standard Transformer architecture. This study focuses on Transformer-based (TB) models applied to Natural Language Processing (NLP) tasks. We begin by examining the key ideas underlying the effectiveness of these models. The Transformer's flexible architecture allows it to incorporate heterogeneous concepts (such as diagnoses, treatments, and measurements) to further improve the accuracy of its predictions. The disease and patient representations produced during (pre-)training can also be useful for future studies (i.e., transfer learning).
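As a minimal illustrative sketch (not code from the paper), the self-attention mechanism the abstract refers to can be expressed in NumPy: every token's query is compared against every other token's key, so dependencies of arbitrary range in the sequence are captured in a single step. All matrix names and dimensions below are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape
    (n_tokens, d_model); Wq, Wk, Wv are learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # scores[i, j] measures how much token i attends to token j,
    # regardless of how far apart they are in the sequence.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

# Toy usage with random data (hypothetical sizes).
rng = np.random.default_rng(0)
n_tokens, d_model = 4, 8
X = rng.normal(size=(n_tokens, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

The output has the same shape as the input, which is what lets Transformer layers be stacked; positional information would be added separately, since attention itself is order-agnostic.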

Published

2021-01-31

How to Cite

Parsola, J. (2021). Exploring the Power of Transformer Models in Hospitality Domain. Mathematical Statistician and Engineering Applications, 70(1), 324–330. https://doi.org/10.17762/msea.v70i1.2314

Section

Articles