I am an ELLIS PhD student in Machine Learning at the University of Copenhagen, supervised by Ole Winther. My research focuses on applying deep learning methods to biological applications.
Previously, I worked at DeepLab, where I developed generative models for transcriptomics and deep neural networks for EEG-based brain-computer interfaces.
I hold BSc and MSc degrees in Electrical and Computer Engineering from NTUA, where I completed my thesis on visual emotion recognition under the supervision of Petros Maragos.
Note: If you are planning to apply to the ELLIS PhD program, feel free to send me your questions about it.
Deep learning methods for RNA sequencing data have proliferated in recent years due to the advent of single-cell RNA sequencing (scRNA-seq), which enables the simultaneous study of many cells per patient. However, for rare cell types data remain scarce, posing several challenges and preventing deep learning models from reaching their full predictive power. Generating realistic synthetic cells to augment the data could enable more informative downstream analyses. Herein, we introduce Mask-cscGAN, a conditional generative adversarial network (GAN) that generates realistic synthetic cells with desired characteristics and models gene sparsity by learning a mask of zeros. Applied to augment a glioblastoma multiforme (GBM) malignant cell dataset, Mask-cscGAN generates realistic synthetic cells of desired cancer subtypes. By generating cells of a rare cancer subtype, Mask-cscGAN improves classification performance on that subtype by 12.29%. Mask-cscGAN is the first method to generate realistic synthetic cells belonging to specified cancer subtypes, and augmentation with Mask-cscGAN outperforms state-of-the-art methods in rare cancer subtype classification.
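To make the conditioning-plus-masking idea concrete, here is a minimal PyTorch sketch of a generator that takes a noise vector and a subtype label and outputs an expression profile gated by a learned mask of zeros. The layer sizes, the embedding-based conditioning, and the sigmoid mask head are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: conditional generator with a learned zero-mask for scRNA-seq-like data.
# All dimensions and the masking mechanism are assumptions for illustration.
import torch
import torch.nn as nn

class MaskedGenerator(nn.Module):
    def __init__(self, latent_dim=128, n_classes=3, n_genes=2000):
        super().__init__()
        self.embed = nn.Embedding(n_classes, latent_dim)  # subtype conditioning
        self.backbone = nn.Sequential(
            nn.Linear(2 * latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
        )
        # Head 1: non-negative expression levels.
        self.expression = nn.Sequential(nn.Linear(1024, n_genes), nn.ReLU())
        # Head 2: per-gene probability of being a (dropout) zero.
        self.mask_logits = nn.Linear(1024, n_genes)

    def forward(self, z, labels):
        h = self.backbone(torch.cat([z, self.embed(labels)], dim=1))
        expr = self.expression(h)
        # Soft, differentiable mask; could be hard-thresholded at sampling time.
        mask = torch.sigmoid(self.mask_logits(h))
        return expr * mask

gen = MaskedGenerator()
z = torch.randn(4, 128)
labels = torch.randint(0, 3, (4,))          # desired cancer subtypes
fake_cells = gen(z, labels)                  # (4, 2000) sparse-ish profiles
```

The separate mask head is what lets the generator reproduce the extreme sparsity of scRNA-seq counts instead of smearing small positive values across all genes.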
SMC 2023
Beyond Within-Subject Performance: A Multi-Dataset Study of Fine-Tuning in the EEG Domain
Christina Sartzetaki, Panagiotis Antoniadis, Nick Antonopoulos, and 4 more authors
In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 2023
There is a critical demand for BCI systems that can swiftly adapt to a new user while also functioning with any user. We propose a fine-tuning approach for neural networks that serves a dual purpose: first, to minimize calibration times by requiring considerably less data (as little as one-sixth) from the target subject than training from scratch, and second, to alleviate cases of BCI illiteracy by providing a substantial performance boost of over 11% in absolute accuracy through features learned from other subjects. Ultimately, our adaptation method surpasses standard within-subject performance by a large margin in all subjects. We present ablation studies across three datasets, demonstrating that fine-tuning outperforms other adaptation methods for BCI systems and that what matters most is the quantity of pre-training subjects rather than their BCI-ability, achieving an absolute increase of over 8% in classification accuracy when scaling up by an order of magnitude. Finally, we compare our approach to the state-of-the-art in EEG-based motor imagery and find it comparable, if not superior, to methods employing far more complex neural networks, obtaining 82.60% and 85.64% within-subject accuracy on the four-class BCIC IV-2a and binary MMI datasets, respectively.
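The recipe above is the standard pre-train-then-fine-tune pattern; the following PyTorch sketch shows the two stages under assumed hyperparameters (epochs, learning rates) and assumed data loaders. The model and loaders are placeholders, not the paper's exact setup.

```python
# Hedged sketch of the fine-tuning recipe: pre-train on trials pooled across
# source subjects, then adapt on a small calibration set from the target subject.
import torch
import torch.nn as nn

def pretrain(model, source_loader, epochs=50, lr=1e-3):
    """Stage 1: learn subject-general EEG features from many source subjects."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for eeg, labels in source_loader:       # pooled source-subject trials
            opt.zero_grad()
            loss_fn(model(eeg), labels).backward()
            opt.step()
    return model

def finetune(model, target_loader, epochs=20, lr=1e-4):
    """Stage 2: same objective, lower learning rate, far fewer target trials."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for eeg, labels in target_loader:       # small calibration set
            opt.zero_grad()
            loss_fn(model(eeg), labels).backward()
            opt.step()
    return model
```

Because stage 2 starts from features shared across subjects, the target subject only needs enough trials to adjust decision boundaries, which is what drives the calibration-time savings.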
FG 2021
Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition
Panagiotis Antoniadis, Panagiotis Paraskevas Filntisis, and Petros Maragos
In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2021
Over the past few years, deep learning methods have shown remarkable results in many face-related tasks, including automatic facial expression recognition (FER) in-the-wild. Meanwhile, the psychology community has proposed numerous models describing human emotional states. However, there is no clear evidence as to which representation is more appropriate, and the majority of FER systems use either the categorical or the dimensional model of affect. Inspired by recent work in multi-label classification, this paper proposes a novel multi-task learning (MTL) framework that exploits the dependencies between these two models using a Graph Convolutional Network (GCN) to recognize facial expressions in-the-wild. Specifically, a shared feature representation is learned for both discrete and continuous recognition in an MTL setting. Moreover, the facial expression classifiers and the valence-arousal regressors are learned through a GCN that explicitly captures the dependencies between them. To evaluate the performance of our method under real-world conditions, we perform extensive experiments on the AffectNet and Aff-Wild2 datasets. The results show that our method improves performance across different datasets and backbone architectures. Finally, we also surpass the previous state-of-the-art methods on the categorical model of AffectNet.
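As a rough illustration of the GCN-based head, the sketch below runs a small GCN over label nodes (seven discrete emotions plus valence and arousal) to produce per-task weight vectors, which are then applied to a shared backbone feature. The node count, embedding sizes, propagation rule, and the identity adjacency are placeholder assumptions; in particular, the adjacency would encode label dependencies (e.g., co-occurrence statistics) rather than be the identity.

```python
# Illustrative MTL-GCN head: GCN over emotion-label nodes yields classifier
# weights for discrete classes and regressor weights for valence/arousal.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return torch.relu(self.lin(adj @ x))    # simple propagation rule

class EmotionGCNHead(nn.Module):
    def __init__(self, n_nodes=9, word_dim=300, feat_dim=512):
        super().__init__()
        self.node_embed = nn.Parameter(torch.randn(n_nodes, word_dim))
        self.gcn1 = GCNLayer(word_dim, 256)
        self.gcn2 = GCNLayer(256, feat_dim)
        # Placeholder adjacency; a real one would capture label dependencies.
        self.register_buffer("adj", torch.eye(n_nodes))

    def forward(self, features):
        # features: (batch, feat_dim) from a shared CNN backbone
        weights = self.gcn2(self.gcn1(self.node_embed, self.adj), self.adj)
        out = features @ weights.t()             # (batch, 9)
        logits = out[:, :7]                      # 7 discrete expression classes
        valence_arousal = torch.tanh(out[:, 7:]) # continuous affect in [-1, 1]
        return logits, valence_arousal

head = EmotionGCNHead()
logits, va = head(torch.randn(8, 512))
```

Tying both tasks to weight vectors produced by one GCN is what lets the discrete classifiers and the valence-arousal regressors share information instead of being trained as independent heads.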
For inquiries or collaborations, the best way to reach me is via email.