
DeepVisInterests : deep data analysis for topics of interest prediction / O. Lazzez, A.M. Qahtani, A. Alsufyani, O. Almutiry, H. Dhahri, V. Piuri, A.M. Alimi. - In: MULTIMEDIA TOOLS AND APPLICATIONS. - ISSN 1380-7501. - 82:26(2023), pp. 40913-40936. [10.1007/s11042-023-14806-2]

DeepVisInterests : deep data analysis for topics of interest prediction

V. Piuri;
2023

Abstract

Deep data analysis for latent information prediction has become an increasingly important area of research. To predict users' interests and other latent attributes, most existing studies have relied on textual data and have obtained accurate results. However, little attention has been paid to visual data, which have become increasingly popular in recent years. This paper addresses the problem of discovering users' attributed interests and analyzes the performance of the automatic prediction by comparing it with self-assessed topics of interest (topics of interest provided by the user in a proposed questionnaire), using data analysis techniques applied to users' visual data. We analyze the content of individual images and aggregate the image-level information to predict a user-level interest distribution. To this end, we employ a Convolutional Neural Network architecture pre-trained on the ImageNet dataset for feature extraction. The suggested system is based on the construction of a users' interests ontology in order to learn a semantic representation of the popular topics of interest defined by social networks (e.g., Facebook). Our experiments show that this analysis enhances the overall prediction performance. We introduce a novel evaluation database to improve our framework's robustness and its ability to generalize to new user profiles. Our proposed framework has shown promising results, yielding a competitive accuracy of 0.80 compared with state-of-the-art techniques.
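The pipeline described in the abstract (per-image features from an ImageNet-pretrained CNN, aggregated into a user-level distribution over topics of interest) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the ResNet-50 backbone, the mean pooling across a user's images, the linear topic head, and the number of topics N_TOPICS are all assumptions, and the ontology-based semantic representation described in the paper is not reproduced here.

# Rough sketch (not the authors' code) of the pipeline from the abstract:
# extract image features with an ImageNet-pretrained CNN, average them per
# user, and map the pooled feature to a distribution over topics of interest.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

N_TOPICS = 15  # assumed number of Facebook-style topics of interest

# ImageNet-pretrained backbone used purely as a feature extractor
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical classifier head mapping the pooled user feature to topics
topic_head = nn.Linear(2048, N_TOPICS)

def predict_user_interests(image_paths):
    """Aggregate image-level CNN features into a user-level topic distribution."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(feature_extractor(img).flatten(1))      # (1, 2048) per image
        user_feat = torch.cat(feats).mean(dim=0, keepdim=True)   # image-level -> user-level
        return torch.softmax(topic_head(user_feat), dim=1)       # distribution over topics

In such a sketch, the hypothetical topic_head would be trained against the self-assessed questionnaire labels mentioned in the abstract; the paper's own method additionally relies on the users' interests ontology for the semantic representation.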
Deep data analysis; Convolutional neural networks; Online social networks; Deep learning; Ontology; Users' interest
Settore ING-INF/05 - Information Processing Systems
2023
Article (author)
Files in this record:
s11042-023-14806-2.pdf - Publisher's version/PDF, Adobe PDF, 3.55 MB (restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1034390
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science (ISI): 0