CLASSIFYING DEEP MOONQUAKES USING MACHINE LEARNING ALGORITHMS
Introduction: The Apollo Lunar Surface Experiment Package (ALSEP) recorded lunar seismic activity continuously between 1969 and 1977 [1]. These data include observations of deep moonquakes (DMQs), which manifest as repeating, tidally linked signals from sources located in geographically tight regions called nests. Events from the same nest have similar waveforms that are distinct from events originating in other nests. Here we explore the potential of machine learning algorithms, such as convolutional neural networks (CNNs), to differentiate between the multiple DMQ event hypocenters.

Background: The ALSEP comprised four seismic stations placed on the near side of the Moon between 1969 and 1972; the network collected seismic data continuously and transmitted them in real time back to Earth until instrument shut-off in 1977 [1]. In that time, the network detected approximately 12,000 seismic events, the most numerous of which are the deep moonquakes recorded on the long-period seismometers. DMQs are repeating lunar seismic events occurring at focal depths between 800 km and 1200 km [2]. These events originate from 319 source regions, or clusters, and exhibit 13.6-day, 27-day, and 206-day periodicities, indicating that the build-up and release of tidal stresses caused by the interaction between the Earth, Moon, and Sun play a role in the DMQ source mechanisms [3]. DMQ events have been valuable for determining lunar interior structure: their arrival times can be used to derive mantle P- and S-wave velocities [4], and other body waves, such as core reflections, have also been identified [5]. The identification and classification of events in the ALSEP data were initially conducted by visual inspection of day-long seismograms [1].
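Because events from the same nest share nearly identical waveforms, a simple similarity measure can, in principle, assign an event to a nest by template matching. The following is a minimal Python sketch of that idea only: `waveform_similarity` and `assign_to_nest` are illustrative names, the zero-lag correlation is a simplification (real pipelines scan over time lags and de-glitch the data first), and the synthetic sinusoid templates are placeholders, not Apollo waveforms.

```python
import numpy as np

def waveform_similarity(a, b):
    """Zero-lag normalized correlation between two equal-length waveforms.

    Values near 1 indicate nearly identical waveforms, as expected for
    events from the same DMQ nest; values near 0 indicate dissimilarity.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

def assign_to_nest(event, templates):
    """Return the nest label whose template correlates best with the event.

    `templates` maps a nest label (e.g. "A1") to a representative waveform.
    """
    return max(templates, key=lambda label: waveform_similarity(event, templates[label]))

# Synthetic demonstration (placeholder data, not real seismograms)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
templates = {"A1": np.sin(2 * np.pi * 5 * t), "A8": np.sin(2 * np.pi * 11 * t)}
event = templates["A1"] + 0.1 * rng.standard_normal(t.size)  # noisy A1-like event
print(assign_to_nest(event, templates))
```

In practice, a cross-correlation scan over lags would replace the zero-lag dot product, since event onsets are not aligned a priori.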
Computational advancements have enabled the application of new techniques that identified additional DMQs: a combination of waveform cross-correlation and cluster analysis positively identified 5905 new moonquakes and 88 new DMQ nests [6], and a cross-correlation algorithm combined with an algorithm to de-glitch the Apollo data yielded 123 new events for the A1 DMQ cluster alone [7]. The Apollo seismic data are difficult to analyze because of low signal-to-noise ratios and instrument glitches that create spikes and/or gaps in the time series.

Current work: Previously, we used a convolutional neural network (CNN) to identify and classify deep moonquake data. We selected DMQ events from clusters A1 and A8, identified in the most recently updated lunar seismic event catalog [7, 9] and recorded on the Apollo 12 long-period (LP) three-component seismometers. Spectrograms were computed from these events and used to train several image-classification CNNs to distinguish an A1 DMQ from an A8 DMQ. Seven different models were trained and tested on the spectrograms; despite various modifications to the CNN architecture, the validation accuracies of the CNNs do not increase beyond 70.1%, as shown in Figure 1, indicating that the algorithms are not learning effectively. These results imply that image-classifier CNNs are inefficient with spectrograms; therefore, a one-dimensional approach might work better.
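The spectrogram inputs described above can be sketched as follows. This is an illustrative outline rather than the exact preprocessing used in this study: `make_spectrogram` is a hypothetical helper built on `scipy.signal.spectrogram`, and the sampling rate, window length, and overlap below are placeholder values, not the Apollo LP instrument settings.

```python
import numpy as np
from scipy import signal

def make_spectrogram(trace, fs, nperseg=128, noverlap=96):
    """Log-power spectrogram of a seismic trace.

    The 2-D (frequency x time) array can serve as an image-like CNN input;
    log scaling compresses dynamic range, which matters for low-SNR data.
    Window parameters here are placeholders for illustration.
    """
    f, t, Sxx = signal.spectrogram(trace, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return f, t, np.log10(Sxx + 1e-12)  # small offset avoids log(0)

# Synthetic demonstration: a 1 Hz sinusoid standing in for a seismic trace
fs = 6.625  # placeholder sampling rate in samples/s
trace = np.sin(2 * np.pi * 1.0 * np.arange(600) / fs)
f, t, S = make_spectrogram(trace, fs)
print(S.shape)  # (frequency bins, time bins)
```

A one-dimensional alternative, as suggested above, would feed the raw (or filtered) time series directly to a 1-D CNN instead of converting it to an image.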