Joint analysis of neuroimaging data from multiple modalities has the potential to improve our understanding of brain function since each modality provides complementary information. In this paper, we address the problem of jointly analyzing functional magnetic resonance imaging (fMRI), structural MRI (sMRI) and electroencephalography (EEG) data collected during an auditory oddball (AOD) task, with the goal of capturing neural patterns that differ between patients with schizophrenia and healthy controls. Traditionally, fusion methods such as joint independent component analysis (jICA) have been used to jointly analyze such multi-modal neuroimaging data. However, previous jICA analyses typically analyze the EEG signal from a single electrode or concatenate signals from multiple electrodes, thus ignoring the potential multilinear structure of the EEG data, and model the data using a common mixing matrix for both modalities. In this paper, we arrange the multi-channel EEG signals as a third-order tensor with modes: subjects, time samples and electrodes, and jointly analyze the tensor with the fMRI and sMRI data, both in the form of subjects by voxels matrices, using a structure-revealing coupled matrix and tensor factorization (CMTF) model. Through this modeling approach, we (i) exploit the multilinear structure of multi-channel EEG data and (ii) capture weights for components indicative of the level of contribution from each modality. We compare the results of the structure-revealing CMTF model with those of jICA and demonstrate that, while both models capture significant distinguishing patterns between patients and controls, the structure-revealing CMTF model provides more robust activation patterns.
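To make the coupling concrete, the following is a minimal sketch of a coupled matrix and tensor factorization fitted with alternating least squares on synthetic data: a third-order "EEG" tensor (subjects × time × electrodes) and two "fMRI"/"sMRI" matrices (subjects × voxels) share the same subjects-mode factor. All dimensions, variable names, and the plain ALS scheme are illustrative assumptions; the paper's structure-revealing CMTF additionally estimates component weights with sparsity penalties to reveal shared versus unshared components, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# subjects, time samples, electrodes, fMRI voxels, sMRI voxels, rank
I, J, K, M1, M2, R = 10, 20, 8, 30, 25, 3

# Ground-truth factors; A (subjects mode) is shared across all modalities
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))    # temporal signatures (EEG)
C = rng.standard_normal((K, R))    # electrode signatures (EEG)
V1 = rng.standard_normal((M1, R))  # "fMRI" voxel maps
V2 = rng.standard_normal((M2, R))  # "sMRI" voxel maps

def kr(U, W):
    """Khatri-Rao (column-wise Kronecker) product."""
    return np.einsum('ir,jr->ijr', U, W).reshape(-1, U.shape[1])

# Synthetic noiseless data: CP tensor plus two coupled matrices
X = np.einsum('ir,jr,kr->ijk', A, B, C)
Y1 = A @ V1.T
Y2 = A @ V2.T

def lstsq_t(Z, T):
    """Solve T = F @ Z.T for F in the least-squares sense."""
    return np.linalg.lstsq(Z, T.T, rcond=None)[0].T

# Coupled ALS from a random initialization
Ah = rng.standard_normal((I, R))
Bh = rng.standard_normal((J, R))
Ch = rng.standard_normal((K, R))
V1h = rng.standard_normal((M1, R))
V2h = rng.standard_normal((M2, R))

for _ in range(300):
    # The shared subjects factor is updated against all three data sets at once
    Z = np.vstack([kr(Bh, Ch), V1h, V2h])
    T = np.hstack([X.reshape(I, -1), Y1, Y2])
    Ah = lstsq_t(Z, T)
    # Modality-specific factors are updated against their own data set only
    Bh = lstsq_t(kr(Ah, Ch), np.moveaxis(X, 1, 0).reshape(J, -1))
    Ch = lstsq_t(kr(Ah, Bh), np.moveaxis(X, 2, 0).reshape(K, -1))
    V1h = np.linalg.lstsq(Ah, Y1, rcond=None)[0].T
    V2h = np.linalg.lstsq(Ah, Y2, rcond=None)[0].T

# Relative reconstruction errors per modality
err_X = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)) / np.linalg.norm(X)
err_Y1 = np.linalg.norm(Y1 - Ah @ V1h.T) / np.linalg.norm(Y1)
err_Y2 = np.linalg.norm(Y2 - Ah @ V2h.T) / np.linalg.norm(Y2)
print(err_X, err_Y1, err_Y2)
```

Because the subjects-mode factor must explain all three data sets simultaneously, the coupling regularizes the decomposition; in the paper, rows of that shared factor serve as subject scores that can be tested for group differences between patients and controls.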