Comparing Optimization Algorithms in ANN Models for House Price Prediction in Pekanbaru

Bibliographic Details
Published in: Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) (Online), Vol. 9, No. 4, pp. 781-787
Main Authors: Winarso, Doni, Edo Arribe, Syahril, Aryanto, Muhardi, Shahrulniza Musa
Format: Journal Article
Language:English
Published: 17.08.2025
ISSN: 2580-0760
Description
Summary: This study evaluates the performance of five optimization algorithms in Artificial Neural Network (ANN) models for predicting house prices in Pekanbaru. The optimizers tested are Adam, AdaDelta, Stochastic Gradient Descent (SGD), Nadam, and Adaptive Sharpness-Aware Minimization (ASAM). A total of 3,149 house sales records were collected from rumah123.com between January and December 2024; after removing 148 incomplete entries, 3,001 valid records remained. The dataset included seven features: price, location, number of bedrooms, number of bathrooms, land area, building area, and garage capacity, with location encoded using one-hot encoding. The research workflow comprised a literature review, problem formulation, data acquisition, preprocessing, model development, and evaluation. Model performance was assessed using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). The results show that SGD consistently achieved the best performance, particularly at a 90:10 train-test split, with the lowest MAPE (1.74%) and MSE (0.3279). Adam and Nadam also performed well, while ASAM had the highest error (MAPE 6.14%). These findings indicate that SGD was the most effective optimizer for this dataset. Future research should explore larger datasets and advanced hyperparameter tuning to improve the generalizability of the model.
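The three evaluation metrics named in the summary have standard definitions that can be computed directly. A minimal Python sketch follows, assuming NumPy arrays of actual and predicted prices; the sample values are purely illustrative and are not taken from the study's dataset:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the errors
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, expressed in percent
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def rmse(y_true, y_pred):
    # Root Mean Square Error: penalizes large errors more heavily than MAE
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Illustrative prices (e.g. in millions of rupiah), not the study's data
y_true = np.array([500.0, 750.0, 620.0, 480.0])
y_pred = np.array([510.0, 735.0, 630.0, 470.0])

print(mae(y_true, y_pred))   # average absolute deviation
print(mape(y_true, y_pred))  # percentage error, comparable across price ranges
print(rmse(y_true, y_pred))  # square-root of mean squared error
```

MAPE is scale-free, which is why the summary reports it as the headline figure for comparing optimizers: a 1.74% error is interpretable regardless of the currency or price magnitude, whereas MAE and RMSE are in the units of the target variable.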
DOI:10.29207/resti.v9i4.6619