<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>DSpace Community:</title>
    <link>https://hdl.handle.net/11000/410</link>
    <description />
    <pubDate>Tue, 28 Apr 2026 17:37:54 GMT</pubDate>
    <dc:date>2026-04-28T17:37:54Z</dc:date>
    <item>
      <title>Gradient tree boosting and the estimation of production frontiers</title>
      <link>https://hdl.handle.net/11000/39748</link>
      <description>Title: Gradient tree boosting and the estimation of production frontiers
Author: Guillén García, María Dolores; Aparicio Baeza, Juan; Esteve, Miriam
Abstract: In production theory and engineering, a topic of interest is the determination of the technical efficiency of firms from the estimation of a technology. By definition, a technology must satisfy a set of microeconomic postulates. Likewise, a valid estimator of a technology should meet the same set of axioms. In this paper, for the first time, we adapt the Gradient Tree Boosting algorithm to the estimation of production technologies satisfying the required theoretical conditions. The new approach shares similarities with the standard Free Disposal Hull (FDH) methodology, but with the advantage that it avoids the typical problem of overfitting. Finally, the performance of the new boosting-based approach is measured through a computational experiment, which shows that it decreases the mean squared error by more than 35% with respect to FDH.</description>
      <pubDate>Wed, 15 Apr 2026 08:45:43 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39748</guid>
      <dc:date>2026-04-15T08:45:43Z</dc:date>
    </item>
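    <!--
      A minimal sketch, assuming a single-output setting and illustrative data,
      of the Free Disposal Hull (FDH) benchmark the abstract compares against:
      the frontier estimate at an input mix x is the largest output observed
      among units that use no more of any input than x.

      import numpy as np

      def fdh_frontier(X, y, x_eval):
          """Maximal observed output among units dominated by x_eval (free disposability)."""
          dominated = np.all(X <= x_eval, axis=1)
          return y[dominated].max() if dominated.any() else 0.0

      # toy data: 5 firms, 2 inputs, 1 output
      X = np.array([[2, 3], [4, 1], [3, 3], [5, 5], [1, 4]], dtype=float)
      y = np.array([4.0, 3.0, 5.0, 7.0, 2.0])

      # output-oriented FDH efficiency scores (>= 1; equal to 1 on the frontier)
      theta = [fdh_frontier(X, y, X[k]) / y[k] for k in range(len(y))]
      print(theta)
    -->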
    <item>
      <title>Measuring dynamic inefficiency through machine learning techniques</title>
      <link>https://hdl.handle.net/11000/39747</link>
      <description>Title: Measuring dynamic inefficiency through machine learning techniques
Author: Aparicio Baeza, Juan; Esteve, Miriam; Kapelko, Magdalena
Abstract: This paper contributes by developing new models for assessing dynamic inefficiency that incorporate machine learning techniques. In particular, the new approaches apply decision tree models to the estimation of dynamic production technologies that account for investment adjustment costs. Methodologically, the new models build on the recently developed techniques of Efficiency Analysis Trees (EAT) and Convexified Efficiency Analysis Trees (CEAT) and extend them further to a dynamic framework comprising dynamic EAT and CEAT models. The study compares dynamic inefficiency scores estimated under the new models against the traditional dynamic free disposal hull (FDH) and dynamic data envelopment analysis (DEA). Our empirical application focuses on dairy manufacturing firms in the main dairy processing countries of the European Union for the years 2014 and 2018. The results show that the inefficiency measured with dynamic CEAT or EAT is higher than the corresponding values calculated through dynamic DEA or FDH. The discriminating power of dynamic DEA (dynamic FDH) improves drastically when switching to dynamic CEAT (dynamic EAT). Finally, differences between countries are observed in the evolution of dynamic inefficiency over the period associated with the abolition of milk quotas.</description>
      <pubDate>Wed, 15 Apr 2026 08:44:58 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39747</guid>
      <dc:date>2026-04-15T08:44:58Z</dc:date>
    </item>
    <item>
      <title>Enhancing the measurement of firm inefficiency accounting for corporate social responsibility: A dynamic data envelopment analysis fuzzy approach</title>
      <link>https://hdl.handle.net/11000/39746</link>
      <description>Title: Enhancing the measurement of firm inefficiency accounting for corporate social responsibility: A dynamic data envelopment analysis fuzzy approach
Author: Aparicio Baeza, Juan; Kapelko, Magdalena; Ortiz Henarejos, Lidia
Abstract: This paper contributes to research on corporate social responsibility (CSR) and the measurement of firm inefficiency by proposing a new method for evaluating inefficiency that accounts for firms’ CSR activities. The new approach handles the imprecise nature of CSR data through the fuzzy data envelopment analysis (FDEA) method and extends it further by allowing for the dynamic interdependence of firms’ production decisions through adjustment costs related to firms’ investments. In addition, the new method deals with zero or negative values among the inputs and/or outputs of the data. The empirical application considers a dataset of CSR activities of European firms in three industries (capital, consumption, and other) over the period 2014–2016. Two main results are found with these data. First, the study shows that fuzzy dynamic inefficiencies tend to be lower than those obtained from the conventional crisp evaluation of inefficiency. Second, the study finds some differences in dynamic inefficiencies at distinct levels of fuzziness. Overall, the results seem to confirm that the dynamic fuzzy methodology adds value to the standard crisp approach.</description>
      <pubDate>Wed, 15 Apr 2026 08:44:01 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39746</guid>
      <dc:date>2026-04-15T08:44:01Z</dc:date>
    </item>
    <item>
      <title>The standard total factor productivity index and its decomposition</title>
      <link>https://hdl.handle.net/11000/39745</link>
      <description>Title: The standard total factor productivity index and its decomposition
Author: Aparicio Baeza, Juan; Santín González, Daniel
Abstract: The Malmquist productivity index is one of the best known and most widely used measures in the economic literature for quantifying and decomposing changes in the productivity of multi-input multi-output production processes over time. Two main approaches are used to calculate this index: the adjacent Malmquist index and the base period Malmquist index. No base period is required to calculate the adjacent Malmquist index, but it fails to comply with the circularity property. The base period Malmquist index uses the technology of a base period and is circular, but the choice of base period is arbitrary. There is, therefore, a trade-off between the two versions of the Malmquist index. The aim of this paper is to introduce a new total factor productivity index that is simultaneously circular and does not need to resort to a base period or an ad hoc reference. To this end, we propose a new multi-input multi-output reference production technology for use as a standard for measuring and decomposing total factor productivity changes. The standard production technology is conceptually attractive because its parameterization is versatile and adaptable to the evolution of a set of firms performing any multi-input multi-output production process. Additionally, the new approach yields a true total factor productivity index, which can be decomposed into an output change and an input change.</description>
      <pubDate>Wed, 15 Apr 2026 08:41:43 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39745</guid>
      <dc:date>2026-04-15T08:41:43Z</dc:date>
    </item>
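    <!--
      For reference, the two variants discussed in the abstract, in standard
      output distance function notation (a textbook formulation, not taken
      from the paper), with D^t the distance function of the period-t technology:

      \[
        M(x^0,y^0,x^1,y^1) = \left[ \frac{D^0(x^1,y^1)}{D^0(x^0,y^0)} \cdot
                                    \frac{D^1(x^1,y^1)}{D^1(x^0,y^0)} \right]^{1/2}
      \]

      which decomposes into efficiency change (EC) and technical change (TC):

      \[
        M = \underbrace{\frac{D^1(x^1,y^1)}{D^0(x^0,y^0)}}_{\text{EC}} \cdot
            \underbrace{\left[ \frac{D^0(x^1,y^1)}{D^1(x^1,y^1)} \cdot
                               \frac{D^0(x^0,y^0)}{D^1(x^0,y^0)} \right]^{1/2}}_{\text{TC}}
      \]

      The base period variant fixes a single technology b throughout,
      M^b = D^b(x^1,y^1) / D^b(x^0,y^0), which is circular but depends on the
      arbitrary choice of b; that trade-off is what the paper addresses.
    -->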
    <item>
      <title>Decomposing profit change: Konüs, Bennet and Luenberger indicators</title>
      <link>https://hdl.handle.net/11000/39744</link>
      <description>Title: Decomposing profit change: Konüs, Bennet and Luenberger indicators
Author: Aparicio Baeza, Juan; Zofío, José L.
Abstract: We introduce complementary decompositions of profit change that, relying on the duality between the profit function and the directional distance function, shed light on the different sources of profit growth, including measures of technical efficiency, allocative efficiency and technological change. Our decompositions extend the literature on Konüs and Bennet quantity and price indicators to profit change. The first decomposition is ‘exact’ in the sense of Diewert, completely exhausting the sources of profit change into profit inefficiency change (including technical and allocative inefficiency change), technological change, and output and input price change. The second decomposition equates the Bennet quantity indicator to a productivity measure represented by the Luenberger indicator plus allocative inefficiency change. We deem it ‘complete’ because, in contrast to the existing literature, it retains the information on allocative inefficiency change while preventing the appearance of residual terms capturing price variations, whose meaningful interpretation had not been addressed until now. Our proposed solution takes advantage of the flexibility of the directional distance function in the choice of a suitable directional vector. All decompositions have the same structural form, and their components can therefore be compared with one another, providing alternative measures of equivalent sources of profit growth.</description>
      <pubDate>Wed, 15 Apr 2026 08:40:38 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39744</guid>
      <dc:date>2026-04-15T08:40:38Z</dc:date>
    </item>
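    <!--
      The Bennet indicators named in the title admit a simple exact identity
      (standard form with arithmetic means over the two periods, written here
      for orientation; the paper's decompositions go further). With profit
      \pi^t = p^t \cdot y^t - w^t \cdot x^t and \bar{z} = (z^0 + z^1)/2:

      \[
        \pi^1 - \pi^0
          = \underbrace{\bar{p} \cdot (y^1 - y^0) - \bar{w} \cdot (x^1 - x^0)}_{\text{Bennet quantity indicator}}
          + \underbrace{\bar{y} \cdot (p^1 - p^0) - \bar{x} \cdot (w^1 - w^0)}_{\text{Bennet price indicator}}
      \]
    -->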
    <item>
      <title>Enhancing the Benefit of the Doubt model through ‘Ensemble-DEA’: achieving the Sustainable Development Goals</title>
      <link>https://hdl.handle.net/11000/39743</link>
      <description>Title: Enhancing the Benefit of the Doubt model through ‘Ensemble-DEA’: achieving the Sustainable Development Goals
Author: Aparicio Baeza, Juan; Kapelko, Magdalena; Monge, Juan Francisco; Zofío, José L.
Abstract: This study presents an innovative approach for constructing composite indicators by combining the Benefit of the Doubt method from Data Envelopment Analysis with ensemble techniques, i.e., ‘Ensemble-DEA’, with randomization in the selection of observations and variables. Our methodology mitigates the curse of dimensionality, which limits the effectiveness of traditional approaches when dealing with numerous indicators. By maintaining data integrity and improving robustness through an ensemble-based technique, our method delivers high discriminatory power and clear rankings for Decision Making Units. Additionally, it enhances benchmarking capabilities by offering unit-specific peer comparisons. Our contributions therefore include the development of robust composite indicators and improved benchmarking insights, ensuring their reliability even in high-dimensional settings. We validate our approach using a real-world dataset containing 72 indicators aligned with the Sustainable Development Goals for European Union countries. The results show that performance in meeting the Sustainable Development Goals is correlated with the level of socioeconomic development and environmental consciousness. In particular, Scandinavian, Northern European and Benelux countries tend to perform best, while Eastern European countries lag in sustainability effectiveness. Furthermore, a comparative analysis against conventional methods underscores the advantages of our approach in managing complex datasets, specifically in terms of improved discriminatory power and benchmarking opportunities.</description>
      <pubDate>Wed, 15 Apr 2026 07:23:48 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39743</guid>
      <dc:date>2026-04-15T07:23:48Z</dc:date>
    </item>
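    <!--
      A minimal sketch, assuming the classical Benefit of the Doubt model
      (each unit gets the non-negative indicator weights most favourable to
      it, with no unit allowed to score above 1); the ensemble randomization
      layer described in the abstract is omitted and the data are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      def bod_score(Y, k):
          """Benefit of the Doubt composite score of unit k (rows of Y = units)."""
          n, m = Y.shape
          res = linprog(c=-Y[k],                    # maximize w . Y[k] (linprog minimizes)
                        A_ub=Y, b_ub=np.ones(n),    # w . Y[j] <= 1 for every unit j
                        bounds=[(0, None)] * m)     # w >= 0
          return -res.fun

      # toy data: 4 units, 3 normalized indicators
      Y = np.array([[0.9, 0.4, 0.7],
                    [0.5, 0.8, 0.6],
                    [0.3, 0.3, 0.2],
                    [1.0, 0.6, 0.9]])
      print([round(bod_score(Y, k), 3) for k in range(4)])
    -->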
    <item>
      <title>The generalized range adjusted measure in data envelopment analysis: Properties, computational aspects and duality</title>
      <link>https://hdl.handle.net/11000/39742</link>
      <description>Title: The generalized range adjusted measure in data envelopment analysis: Properties, computational aspects and duality
Author: Aparicio Baeza, Juan; Monge, Juan Francisco
Abstract: The measurement of technical efficiency is a topic of great interest in microeconomics and engineering. Data Envelopment Analysis (DEA) is one of the existing techniques for measuring technical efficiency. One of the challenges related to DEA is to introduce a “well-defined” efficiency measure, meaning that the technical efficiency measure should satisfy a list of mathematical and economic properties. In this regard, an unresolved question in the DEA literature to date is whether any measure can satisfy both Indication, also called Pareto-efficiency identification, and uniqueness of the projection point generated by the corresponding efficiency optimization model. With this issue in mind, this paper introduces a new family of measures, inspired by the Range-Adjusted Measure (RAM), that satisfies a list of six properties. This family of measures is called the Generalized Range-Adjusted Measure (GRAM). Additionally, we show how GRAM can be implemented from a computational point of view, and we provide an economic interpretation of its dual program in terms of (shadow) profit maximization. Finally, an empirical example taken from the literature serves to illustrate the new methodology.</description>
      <pubDate>Wed, 15 Apr 2026 07:23:07 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39742</guid>
      <dc:date>2026-04-15T07:23:07Z</dc:date>
    </item>
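    <!--
      For orientation, the Range-Adjusted Measure that GRAM generalizes, in
      its standard additive-model form (the GRAM family itself is defined in
      the paper): with m inputs, s outputs, optimal slacks s_i^- and s_r^+,
      and observed input and output ranges R_i^- and R_r^+,

      \[
        \mathrm{RAM} = 1 - \frac{1}{m+s} \left( \sum_{i=1}^{m} \frac{s_i^-}{R_i^-}
                       + \sum_{r=1}^{s} \frac{s_r^+}{R_r^+} \right)
      \]

      so RAM equals 1 exactly when every slack is zero, which is the
      Indication property the abstract refers to.
    -->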
    <item>
      <title>Comparing the behaviour of basic linear algebra routines on multicore platforms</title>
      <link>https://hdl.handle.net/11000/39038</link>
      <description>Title: Comparing the behaviour of basic linear algebra routines on multicore platforms
Author: Cuenca, Javier; García, Luis P.; Giménez, Domingo; Quesada-Martínez, Manuel
Abstract: The use of an OpenMP compiler optimized for a multicore system can help obtain programs with satisfactory execution times, but a system may provide access to more than one compiler, and different compilers optimize different parts of the code to different degrees. In this paper, the influence of the compiler on the performance of linear algebra routines is studied. From the results of the experiments carried out, we conclude that a poly-compiling approach, which automatically decides the best compiler, is necessary.</description>
      <pubDate>Tue, 27 Jan 2026 12:55:55 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39038</guid>
      <dc:date>2026-01-27T12:55:55Z</dc:date>
    </item>
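    <!--
      A minimal sketch of the poly-compiling idea the abstract argues for:
      build the same routine with each available compiler, time the binaries,
      and keep the fastest. Compiler names, flags and the source file are
      illustrative assumptions, not the authors' experimental setup.

      import shutil, subprocess, time

      SOURCE = "matmul.c"                  # hypothetical OpenMP benchmark kernel
      COMPILERS = ["gcc", "clang", "icx"]  # use whichever are installed

      def build_and_time(cc):
          exe = f"./matmul_{cc}"
          subprocess.run([cc, "-O3", "-fopenmp", SOURCE, "-o", exe], check=True)
          t0 = time.perf_counter()
          subprocess.run([exe], check=True)
          return time.perf_counter() - t0

      timings = {cc: build_and_time(cc) for cc in COMPILERS if shutil.which(cc)}
      if timings:
          best = min(timings, key=timings.get)
          print(f"fastest compiler for {SOURCE}: {best} ({timings[best]:.3f} s)")
    -->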
    <item>
      <title>Modelado y autooptimización en esquemas paralelos de backtracking</title>
      <link>https://hdl.handle.net/11000/39037</link>
      <description>Title: Modelado y autooptimización en esquemas paralelos de backtracking
Author: Quesada-Martínez, Manuel; Giménez, Domingo
Abstract: This work studies the modelling of the execution time of parallel backtracking schemes. Different programming schemes are studied, identifying the parameters that influence the execution time. An optimization methodology based on the automatic selection of parameters is proposed in order to obtain executions with reduced times. The properties that a sequential backtracking algorithmic scheme should have in order to carry out the parallelization internally are also studied. In this way, the aim is to shield end users from the complexities of parallel programming.</description>
      <pubDate>Tue, 27 Jan 2026 12:53:34 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39037</guid>
      <dc:date>2026-01-27T12:53:34Z</dc:date>
    </item>
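    <!--
      A hypothetical illustration (not the authors' code) of the scheme the
      abstract describes: a sequential backtracking skeleton is parallelized
      internally by distributing the first-level branches of the search tree
      among worker processes, with the number of workers as the parameter to
      be selected automatically.

      from multiprocessing import Pool

      def count_solutions(n, placed):
          """Sequential backtracking: count N-queens completions of a partial board."""
          row = len(placed)
          if row == n:
              return 1
          total = 0
          for col in range(n):
              if all(col != c and abs(col - c) != row - r
                     for r, c in enumerate(placed)):
                  total += count_solutions(n, placed + [col])
          return total

      def parallel_count(n, workers):
          # each first-level branch (queen position in row 0) is an independent task
          with Pool(workers) as pool:
              return sum(pool.starmap(count_solutions,
                                      [(n, [col]) for col in range(n)]))

      if __name__ == "__main__":
          for w in (1, 2, 4):  # candidate values of the tunable parameter
              print(w, parallel_count(10, w))
    -->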
    <item>
      <title>Analysis of the Influence of the Compiler on Multicore Performance</title>
      <link>https://hdl.handle.net/11000/39036</link>
      <description>Title: Analysis of the Influence of the Compiler on Multicore Performance
Author: Cuenca, Javier; García, Luis P.; Giménez, Domingo; Quesada-Martínez, Manuel
Abstract: The possibility of connecting several nodes in a network of processors popularized parallel programming in the scientific community, but its use has been limited by the difficulty of message-passing programming. With the arrival of multicore processors, parallel programming has regained popularity. The use of an OpenMP compiler optimized for the multicore system in question is a good option, but a system may provide access to more than one compiler, and different compilers can appropriately optimize different parts of the code. In this paper we study, theoretically and experimentally, the influence of the compiler on the performance of routines. We conclude that a poly-compiling approach that decides the best compiler for each situation is necessary.</description>
      <pubDate>Tue, 27 Jan 2026 12:52:56 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39036</guid>
      <dc:date>2026-01-27T12:52:56Z</dc:date>
    </item>
  </channel>
</rss>