A Study of Wordle Game Results Based on Deep Learning

Bibliographic Details
Published in:2023 7th International Conference on Computer, Software and Modeling (ICCSM) pp. 20 - 24
Main Authors: Gong, Zhenxi, Yao, Junchen, Qian, Weiyi, Zhou, Yuhang, Han, Jiaxing
Format: Conference Proceeding
Language:English
Published: IEEE 21.07.2023
Description
Summary:Wordle is a popular daily puzzle currently offered by The New York Times. Players attempt to solve the puzzle by guessing a five-letter word in six or fewer tries, receiving feedback after each guess. We use a Savitzky-Golay filter to remove noise from the data, then build an LSTM model to predict the number of reported results. The model was trained with an early-stopping mechanism to obtain the best parameters n and m, which were then input into the model, and the model was used to predict the number of reported results on March 1, 2023. In addition, a CCNN model was developed to extract different attribute features of words separately and predict the distribution of reported results. The constructed CCNN model was also used to predict the results for specific words, i.e., the percentage of reported results on that day. The predicted percentages for 1 to 6 guesses and X are 0.46%, 5.96%, 23.06%, 32.51%, 23.63%, 11.58%, and 2.76%, respectively. Finally, we use a Gaussian clustering algorithm to cluster word difficulty; judging from the difficulty scores, three levels emerge. Since we were not sure which classifier had the highest classification accuracy, we chose the CCNN algorithm to predict the difficulty classification of words over several attempts.
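The Savitzky-Golay smoothing step mentioned in the abstract can be sketched as below. The window length, polynomial order, and the synthetic noisy series are all assumptions for illustration; the record does not state the paper's actual settings or data.

```python
# Sketch of Savitzky-Golay denoising, as in the abstract's preprocessing
# step. Window length 11 and polynomial order 3 are assumed values.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
clean = np.sin(2 * np.pi * t)                       # underlying signal
noisy = clean + rng.normal(scale=0.2, size=t.size)  # stand-in for reported counts

# Fit a degree-3 polynomial in each sliding window of 11 samples.
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
print(smoothed.shape)  # same length as the input series
```

The filter preserves the series length and local polynomial shape, which is why it is a common choice before feeding a noisy time series to a sequence model such as an LSTM.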
DOI:10.1109/ICCSM60247.2023.00013
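The abstract's Gaussian clustering of word difficulty into three levels can be illustrated with a minimal one-dimensional Gaussian-mixture EM loop. The synthetic difficulty scores, component count, and initialization are assumptions; the paper's actual features and algorithmic details are not given in this record.

```python
# Minimal 1-D Gaussian-mixture EM sketch: group word-difficulty scores
# into three levels, in the spirit of the abstract's Gaussian clustering.
import numpy as np

rng = np.random.default_rng(42)
scores = np.concatenate([
    rng.normal(1.0, 0.1, 50),  # synthetic "easy" words
    rng.normal(2.0, 0.1, 50),  # synthetic "medium" words
    rng.normal(3.0, 0.1, 50),  # synthetic "hard" words
])

K = 3
mu = np.array([0.5, 1.5, 2.5])   # initial component means
var = np.full(K, 0.5)            # initial component variances
weights = np.full(K, 1.0 / K)    # mixing weights

for _ in range(100):
    # E-step: responsibility of each component for each score.
    dens = weights * np.exp(-(scores[:, None] - mu) ** 2 / (2 * var)) \
        / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    nk = resp.sum(axis=0)
    weights = nk / scores.size
    mu = (resp * scores[:, None]).sum(axis=0) / nk
    var = (resp * (scores[:, None] - mu) ** 2).sum(axis=0) / nk

levels = resp.argmax(axis=1)  # each score's difficulty level (0, 1, or 2)
print(np.sort(mu))
```

With well-separated score bands the fitted means recover the three difficulty levels; each word is then assigned the level whose Gaussian gives it the highest responsibility.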