LA INTELIGENCIA DE NEGOCIOS TOMANDO UN ENFOQUE CON SQOOP, FLUME Y HDFS EN HADOOP.

Bibliographic Details
Title: LA INTELIGENCIA DE NEGOCIOS TOMANDO UN ENFOQUE CON SQOOP, FLUME Y HDFS EN HADOOP. (Spanish)
Alternate Title: BUSINESS INTELLIGENCE TAKING AN APPROACH WITH SQOOP, FLUME AND HDFS ON HADOOP. (English)
Authors: Mendoza Loor, José Javier, Bonilla Vimos, Washington Ramiro, Quezada Valencia, Segundo Isaías, Torres Bueno, María Belén
Source: Revista Científica Arbitrada Multidisciplinaria PENTACIENCIAS; Jul-Sep 2024, Vol. 6 Issue 5, p53-63, 11p
Subject Terms: DISTRIBUTED computing, COMPUTER workstation clusters, OPEN source software, SOFTWARE frameworks, BUSINESS intelligence
Abstract: In the Hadoop ecosystem, Sqoop, Flume, and HDFS (Hadoop Distributed File System) play important and complementary roles in handling and processing large volumes of data. Combined, these tools have allowed organizations to manage data from diverse sources, both structured and unstructured, in an efficient and scalable way. Business intelligence (BI) on Hadoop offers a powerful, scalable means of managing, analyzing, and extracting value from large data volumes: with its ability to process massive amounts of data distributed across server clusters, Hadoop has been integrated with various business intelligence tools to deliver valuable business insights. Hadoop is an open-source software framework that enables the distributed processing of large data volumes across computer clusters using programming models; it is considered an essential tool in the field of big data and is designed to scale from single servers to thousands of machines, each offering local computation and storage. Such environments have been used to analyze customers and their behavior, to optimize supply chains by analyzing data in real time and thereby improve operational efficiency, to detect fraud by processing large volumes of data, and to segment customers for effective marketing campaigns. [ABSTRACT FROM AUTHOR]
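The article itself reports no code; the following minimal sketch in Java, using the standard org.apache.hadoop.fs API, only illustrates the kind of HDFS write that the abstract's ingestion tools ultimately perform when landing data in the distributed file system. The namenode URI, target path, and sample payload are illustrative assumptions, not details taken from the article.

// Minimal sketch (assumptions noted): write a small record batch into HDFS
// with the org.apache.hadoop.fs API, the storage layer described in the abstract.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.nio.charset.StandardCharsets;

public class HdfsIngestSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed namenode address; on a real cluster this normally comes from core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            // Hypothetical landing path for a batch of structured records.
            Path target = new Path("/data/raw/sales/2024/07/batch-0001.csv");
            // Create the file (overwriting if present) and write a small CSV payload.
            try (FSDataOutputStream out = fs.create(target, true)) {
                out.write("order_id,customer_id,total\n1001,42,199.90\n"
                        .getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Wrote " + fs.getFileStatus(target).getLen()
                    + " bytes to " + target);
        }
    }
}

In typical deployments (an assumption about usage, not a step reported in the article), a structured source would reach such a directory through a Sqoop command along the lines of sqoop import --connect <jdbc-url> --table <table> --target-dir /data/raw/..., while streaming or unstructured sources would arrive via a Flume agent configured with an HDFS sink writing to the same file system.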
Copyright of Revista Científica Arbitrada Multidisciplinaria PENTACIENCIAS is the property of Revista Científica Arbitrada Multidisciplinaria PENTACIENCIAS and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN: 2806-5794
DOI: 10.59169/pentaciencias.v6i4.1177