<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>DSpace Community :</title>
  <link rel="alternate" href="https://hdl.handle.net/11000/30411" />
  <subtitle />
  <id>https://hdl.handle.net/11000/30411</id>
  <updated>2026-04-28T16:57:29Z</updated>
  <dc:date>2026-04-28T16:57:29Z</dc:date>
  <entry>
    <title>Engineering a Scalable Laboratory Infrastructure for Assembly Language Scaffolding: Design and Deployment of a Locally Optimized GenAI Assistant for the CODE-2 Educational Architecture</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/39810" />
    <author>
      <name>García Crespí, Federico</name>
    </author>
    <id>https://hdl.handle.net/11000/39810</id>
    <updated>2026-04-28T01:03:28Z</updated>
    <published>2026-04-27T07:58:03Z</published>
    <summary type="text">Title: Engineering a Scalable Laboratory Infrastructure for Assembly Language Scaffolding: Design and Deployment of a Locally Optimized GenAI Assistant for the CODE-2 Educational Architecture
Author: García Crespí, Federico
Abstract: The transition from high-level programming to assembly language constitutes a well-documented pedagogical bottleneck in computer engineering curricula, particularly in large-cohort laboratory settings where individualized scaffolding cannot scale. This paper presents the design, implementation, and technical evaluation of a locally deployable generative AI assistant engineered specifically for the CODE-2 educational processor architecture. The system is intended as laboratory infrastructure, not as a replacement for human instruction; its primary contribution is enabling scalable, privacy-preserving syntax scaffolding without dependency on cloud services or internet connectivity. A synthetic task bank of 50,000 instruction pairs was procedurally generated to cover the full CODE-2 curriculum. Three fine-tuning strategies were compared on a consumer GPU: Prompt Tuning, Low-Rank Adaptation (LoRA), and Full Fine-Tuning of a T5-Small encoder-decoder model. Full Fine-Tuning achieved 94.10% Exact Match on the held-out evaluation set, demonstrating that rigid assembly syntax requires full parameter adaptation. Post-training INT8 quantization via ONNX Runtime reduced inference latency by 69% (from 1,689 ms to 526 ms) on standard laboratory hardware (Intel i5, 8 GB RAM), with a precision loss below 1%. The resulting system operates entirely offline, precluding data exfiltration by design. The system is integrated into laboratory workflows as a supervised scaffolding tool, requiring mandatory emulator-based verification of all AI-generated code. Pedagogical implications are discussed as plausible benefits; no controlled learning-gains study is reported. The work demonstrates a replicable pipeline for building domain-specific language model infrastructure tailored to CPU-only educational environments.</summary>
    <dc:date>2026-04-27T07:58:03Z</dc:date>
  </entry>
  <entry>
    <title>Software architecture for real-time hyperspectral analysis in material sorting systems</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/39775" />
    <author>
      <name>Sarriás, Adrián</name>
    </author>
    <author>
      <name>Martínez-Rach, Miguel O.</name>
    </author>
    <author>
      <name>López-Granado, Otoniel</name>
    </author>
    <author>
      <name>Migallón, Héctor</name>
    </author>
    <id>https://hdl.handle.net/11000/39775</id>
    <updated>2026-04-17T01:07:53Z</updated>
    <published>2026-04-16T07:59:22Z</published>
    <summary type="text">Title: Software architecture for real-time hyperspectral analysis in material sorting systems
Author: Sarriás, Adrián; Martínez-Rach, Miguel O.; López-Granado, Otoniel; Migallón, Héctor
Abstract: The integration of hyperspectral imaging into industrial sorting systems has enabled high-precision classification of materials with similar visual characteristics but different chemical compositions. However, the real-time processing demands of HSI data acquisition, characterised by high spectral and spatial resolution, require advanced computational strategies. This paper presents a scalable and efficient software architecture designed for real-time hyperspectral analysis in automated material sorting lines. The architecture exploits heterogeneous and homogeneous parallelism to distribute pre-processing, classification and segmentation tasks across multiple threads and processing cores. Two classification methods, based on Spectral Angle Mapper and Artificial Neural Networks, are developed and evaluated; both show high accuracy in material identification, but they affect system scalability in different ways. Extensive performance tests show that the proposed framework meets strict timing constraints and maintains low-latency operation on standard multi-core CPU systems. The modular design of the system ensures adaptability to different hardware configurations and material types, supporting future scalability and integration into diverse industrial environments. The real-time constraint imposed by the camera’s maximum frame rate is 1.493 ms per frame. Thanks to the optimisations applied, the critical processes, pre-processing and classification, have been reduced to just over 30 µs each, consuming only about 5% of the available time and leaving almost 95% free for additional operations or performance enhancements. This results in a system that is scalable both from a computational perspective and in terms of increasing the overall performance of the industrial plant.</summary>
    <dc:date>2026-04-16T07:59:22Z</dc:date>
  </entry>
  <entry>
    <title>Saliency Dataset and Predictive Model for Areas of Interest in VVC Perceptual Coding</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/39591" />
    <author>
      <name>Kessler Martín, Jorge</name>
    </author>
    <author>
      <name>Fernández Lagos, Pablo</name>
    </author>
    <author>
      <name>García Lucas, David</name>
    </author>
    <author>
      <name>Cebrián Márquez, Gabriel</name>
    </author>
    <author>
      <name>Ríos, Belén</name>
    </author>
    <author>
      <name>Vigueras, Guillermo</name>
    </author>
    <author>
      <name>Díaz Honrubia, Antonio Jesús</name>
    </author>
    <id>https://hdl.handle.net/11000/39591</id>
    <updated>2026-03-27T02:06:36Z</updated>
    <published>2026-03-26T12:00:04Z</published>
    <summary type="text">Title: Saliency Dataset and Predictive Model for Areas of Interest in VVC Perceptual Coding
Author: Kessler Martín, Jorge; Fernández Lagos, Pablo; García Lucas, David; Cebrián Márquez, Gabriel; Ríos, Belén; Vigueras, Guillermo; Díaz Honrubia, Antonio Jesús
Abstract: Video coding standardization organizations have invested significant efforts in achieving greater compression factors over the years. Approved in 2020, the Versatile Video Coding (VVC) standard reduces the bit rate needed to encode a sequence by half compared to its predecessor. However, users today have increasingly demanding requirements, leading to a significant rise in video traffic on the Internet. In this context, perceptual video coding aims to reduce video bit rate by decreasing the objective quality while maintaining the subjective quality. This work presents a novel dataset designed for training models to predict video saliency, i.e., areas in the video to which viewers are more likely to pay attention. The dataset is publicly available. Furthermore, this work also proposes a machine learning model that classifies each Coding Tree Unit (CTU) as salient or not, and adjusts its quality accordingly. The results show that this model has an accuracy of 95% and correctly classifies as salient 98% of the CTUs that are actually salient.</summary>
    <dc:date>2026-03-26T12:00:04Z</dc:date>
  </entry>
  <entry>
    <title>Comparing V-Nova LCEVC SDK with Practical Open-Source Video Codecs</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/39590" />
    <author>
      <name>Valera, María</name>
    </author>
    <author>
      <name>Rodríguez Sánchez, Rafael</name>
    </author>
    <author>
      <name>Cuenca, Pedro</name>
    </author>
    <author>
      <name>Cebrián Márquez, Gabriel</name>
    </author>
    <author>
      <name>Díaz Honrubia, Antonio Jesús</name>
    </author>
    <author>
      <name>García Lucas, David</name>
    </author>
    <id>https://hdl.handle.net/11000/39590</id>
    <updated>2026-03-27T02:06:47Z</updated>
    <published>2026-03-26T11:59:06Z</published>
    <summary type="text">Title: Comparing V-Nova LCEVC SDK with Practical Open-Source Video Codecs
Author: Valera, María; Rodríguez Sánchez, Rafael; Cuenca, Pedro; Cebrián Márquez, Gabriel; Díaz Honrubia, Antonio Jesús; García Lucas, David
Abstract: This paper presents a comparative evaluation of the V-Nova LCEVC SDK against several practical open-source video encoders, namely SVT-AV1, XEVE, VVenC, x265, and x264. We analyze the trade-offs between the compression efficiency and encoder/decoder runtime of these encoders for high-resolution (UHD and HD) 10-bit consumer applications under a random access configuration. Rate–distortion behavior is assessed using Video Multimethod Assessment Fusion (VMAF, and VMAF-NEG) and Peak Signal-to-Noise Ratio (PSNR), while computational cost is measured through the encoder/decoder runtime. We also analyze the impact of LCEVC’s enhancement layer in terms of both bitrate increase and rate–distortion improvement. The results show that the V-Nova LCEVC SDK delivers notable reductions in encoding time with respect to its base codecs, highlighting its suitability as a low-complexity enhancement layer. By comparison, VVenC exhibits strong compression performance at the expense of high complexity, XEVE also displays considerable encoding times, and SVT-AV1 offers a more balanced compromise between efficiency and computational requirements.</summary>
    <dc:date>2026-03-26T11:59:06Z</dc:date>
  </entry>
  <entry>
    <title>A fast full partitioning algorithm for HEVC-to-VVC video transcoding using Bayesian classifiers</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/39589" />
    <author>
      <name>García Lucas, David</name>
    </author>
    <author>
      <name>Cebrián Márquez, Gabriel</name>
    </author>
    <author>
      <name>Díaz Honrubia, Antonio Jesús</name>
    </author>
    <author>
      <name>Mallikarachchi, Thanuja</name>
    </author>
    <author>
      <name>Cuenca Castillo, Pedro Ángel</name>
    </author>
    <id>https://hdl.handle.net/11000/39589</id>
    <updated>2026-03-27T02:06:46Z</updated>
    <published>2026-03-26T11:58:15Z</published>
    <summary type="text">Title: A fast full partitioning algorithm for HEVC-to-VVC video transcoding using Bayesian classifiers
Author: García Lucas, David; Cebrián Márquez, Gabriel; Díaz Honrubia, Antonio Jesús; Mallikarachchi, Thanuja; Cuenca Castillo, Pedro Ángel
Abstract: The Versatile Video Coding (VVC) standard was released in 2020 to replace the High Efficiency Video Coding (HEVC) standard, making it necessary to convert HEVC-encoded content to VVC to exploit its compression performance, which is achieved by using a larger block size of 128 × 128 pixels, among other new coding tools. However, 80.93% of the encoding time is spent on finding a suitable block partitioning. To reduce this time, this proposal presents an HEVC-to-VVC transcoding algorithm focused on accelerating the CTU partitioning decisions. The transcoder takes different information from the input HEVC bitstream and feeds it to two Bayes-based models. Experimental results show a time saving in the transcoding process of 45.40%, compared with the traditional cascade transcoder. This time gain has been obtained on average for all test sequences in the Random Access scenario, at the expense of only 1.50% in BD-rate.</summary>
    <dc:date>2026-03-26T11:58:15Z</dc:date>
  </entry>
  <entry>
    <title>Adaptive quadtree splitting parallelization (AQSP) algorithm for the VVC standard</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/39588" />
    <author>
      <name>González Ruíz, Alberto</name>
    </author>
    <author>
      <name>Díaz Honrubia, Antonio Jesús</name>
    </author>
    <author>
      <name>Tapia Fernández, Santiago</name>
    </author>
    <author>
      <name>García Lucas, David</name>
    </author>
    <author>
      <name>Cebrián Márquez, Gabriel</name>
    </author>
    <author>
      <name>Mengual Galán, Luis</name>
    </author>
    <id>https://hdl.handle.net/11000/39588</id>
    <updated>2026-03-27T02:06:36Z</updated>
    <published>2026-03-26T11:57:22Z</published>
    <summary type="text">Title: Adaptive quadtree splitting parallelization (AQSP) algorithm for the VVC standard
Author: González Ruíz, Alberto; Díaz Honrubia, Antonio Jesús; Tapia Fernández, Santiago; García Lucas, David; Cebrián Márquez, Gabriel; Mengual Galán, Luis
Abstract: The Versatile Video Coding (VVC) standard, also known as H.266, was released in 2020 as the natural successor to the High Efficiency Video Coding (HEVC) standard. Among its innovative coding tools, VVC extended the concept of quadtree (QT) splitting to the multi-type tree (MTT) structure, introducing binary and ternary partitions to enhance HEVC’s coding efficiency. While this brought significant compression improvements, it also resulted in a substantial increase in encoding time, primarily due to VVC’s larger Coding Tree Unit (CTU) size of 128×128 pixels. To mitigate this, this work introduces a flexible parallel approach for the QT traversal and splitting scheme of the VVC encoder, called the adaptive quadtree splitting parallelization (AQSP) algorithm. This approach is based on the distribution of coding units (CUs) among different threads, using the current depth level of the QT as a basis to minimize the number of broken dependencies. In this way, the algorithm achieves a good trade-off between time savings and coding efficiency. Experimental results show that, when compared with the original VVC encoder, AQSP achieves an acceleration factor of 2.04x with 4 threads at the expense of a low impact in terms of BD-rate. These outcomes position AQSP competitively in comparison with other state-of-the-art approaches.</summary>
    <dc:date>2026-03-26T11:57:22Z</dc:date>
  </entry>
  <entry>
    <title>High-Quality Video Streaming Over Urban Vehicular Networks</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/38132" />
    <author>
      <name>Piñol Peral, Pablo</name>
    </author>
    <author>
      <name>Garrido Abenza, Pedro Pablo</name>
    </author>
    <author>
      <name>Perez Malumbres, Manuel</name>
    </author>
    <author>
      <name>López-Granado, Otoniel</name>
    </author>
    <id>https://hdl.handle.net/11000/38132</id>
    <updated>2025-12-03T12:52:34Z</updated>
    <published>2025-11-12T08:45:07Z</published>
    <summary type="text">Title: High-Quality Video Streaming Over Urban Vehicular Networks
Author: Piñol Peral, Pablo; Garrido Abenza, Pedro Pablo; Perez Malumbres, Manuel; López-Granado, Otoniel
Abstract: Video streaming services over vehicular ad-hoc networks (VANETs) are in high demand for numerous applications associated with the connected vehicle (infotainment, driver assistance, accident support, etc.). However, streaming high-quality video through a VANET is not a trivial task, as the wireless channel is highly unreliable and suffers from bandwidth constraints. As a consequence, many packets may be lost, making it very difficult for the receiver to reconstruct a video with the minimum quality required. Our proposed scheme combines several aspects of the overall video streaming architecture by following a cross-layer approach that includes: (a) the video packet stream content characteristics, (b) an adaptive forward error correction coding scheme, and (c) the use of QoS services. An adaptive RaptorQ coding scheme is proposed to protect the video packet stream without wasting the available network bandwidth. At the same time, we use the QoS differentiated services of IEEE 802.11p to prioritise critical video packets, in order to avoid degradation of video quality during streaming. Finally, we provide a mechanism to reduce the impact of synchronisation effects on the IEEE 1609.4 multiplexed service channel, which reduces packet collisions at the beginning of the service channel slot. All of these techniques, when properly combined, enable high-quality video streaming services in urban VANET scenarios, thus providing a pleasant video quality experience to users even under different network conditions, with moderate to high packet error rates. In order to test the performance of our proposal, we use a highly detailed simulation framework under different network conditions. The results of this work are expected to provide a feasible solution for high-quality video streaming services in urban VANETs.</summary>
    <dc:date>2025-11-12T08:45:07Z</dc:date>
  </entry>
  <entry>
    <title>Perceptual QP optimization for VVC with dual hybrid neural networks</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/36863" />
    <author>
      <name>Ruiz Atencia, Javier</name>
    </author>
    <author>
      <name>Lopez Granado, Otoniel</name>
    </author>
    <author>
      <name>Pérez Malumbres, Manuel</name>
    </author>
    <author>
      <name>Martínez-Rach, Miguel</name>
    </author>
    <id>https://hdl.handle.net/11000/36863</id>
    <updated>2025-07-15T01:04:25Z</updated>
    <published>2025-07-14T12:04:57Z</published>
    <summary type="text">Title: Perceptual QP optimization for VVC with dual hybrid neural networks
Author: Ruiz Atencia, Javier; Lopez Granado, Otoniel; Pérez Malumbres, Manuel; Martínez-Rach, Miguel
Abstract: This paper introduces a dual hybrid neural network model combining convolutional neural networks (CNNs) and artificial neural networks (ANNs) to optimize the quantization parameter (QP) for both 64 × 64 and 32 × 32 blocks in the Versatile Video Coding (VVC) standard, enhancing video quality and compression efficiency. The model employs CNNs for spatial feature extraction and ANNs for structured data handling, addressing the limitations of current heuristic and just noticeable distortion (JND)-based methods. A dataset of luminance-channel image blocks, encoded with various QP values, is generated and preprocessed, and the dual hybrid network structure is designed with convolutional and dense layers. The QP optimization is applied at two levels: the 64 × 64 model provides a global QP offset, while the 32 × 32 model refines the QP for further partitioned blocks. Performance evaluations using model error metrics such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE), as well as perceptual metrics such as weighted PSNR (WPSNR), MS-SSIM, PSNR-HVS-M, and VMAF, demonstrate the model’s effectiveness. While our approach performs competitively with state-of-the-art algorithms, it significantly outperforms them in VMAF, the most advanced and widely adopted perceptual quality metric. Furthermore, the dual-model approach yields better results at lower resolutions, whereas the single-model approach is more effective at higher resolutions. These results highlight the adaptability of the proposed models, offering improvements in both compression efficiency and perceptual quality, making them highly suitable for practical applications in modern video coding.</summary>
    <dc:date>2025-07-14T12:04:57Z</dc:date>
  </entry>
  <entry>
    <title>Analysis of the Perceptual Quality Performance of Different HEVC Coding Tools</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/36862" />
    <author>
      <name>Ruiz Atencia, Javier</name>
    </author>
    <author>
      <name>Lopez Granado, Otoniel</name>
    </author>
    <author>
      <name>Pérez Malumbres, Manuel</name>
    </author>
    <author>
      <name>Martínez-Rach, Miguel</name>
    </author>
    <author>
      <name>Van Wallendael, Glenn</name>
    </author>
    <id>https://hdl.handle.net/11000/36862</id>
    <updated>2025-07-15T01:04:24Z</updated>
    <published>2025-07-14T12:04:43Z</published>
    <summary type="text">Title: Analysis of the Perceptual Quality Performance of Different HEVC Coding Tools
Author: Ruiz Atencia, Javier; Lopez Granado, Otoniel; Pérez Malumbres, Manuel; Martínez-Rach, Miguel; Van Wallendael, Glenn
Abstract: Each new video coding standard includes encoding techniques that aim to improve on the performance and quality of previous standards. During the development of these techniques, PSNR was used as the main distortion metric. However, the PSNR metric does not consider the subjectivity of the human visual system, so the performance of some coding tools is questionable from the perceptual point of view. To further explore this point, we have developed a detailed study of the perceptual sensitivity of different HEVC video coding tools. To perform this study, we used several popular objective quality assessment metrics to measure the perceptual response of every single coding tool. The conclusions of this work will help to determine the set of HEVC coding tools that provides, in general, the best perceptual response.</summary>
    <dc:date>2025-07-14T12:04:43Z</dc:date>
  </entry>
  <entry>
    <title>A Hybrid Contrast and Texture Masking Model to Boost High Efficiency Video Coding Perceptual Rate-Distortion Performance</title>
    <link rel="alternate" href="https://hdl.handle.net/11000/36861" />
    <author>
      <name>Ruiz Atencia, Javier</name>
    </author>
    <author>
      <name>López-Granado, Otoniel</name>
    </author>
    <author>
      <name>Pérez Malumbres, Manuel</name>
    </author>
    <author>
      <name>Martínez-Rach, Miguel</name>
    </author>
    <author>
      <name>Ruiz Coll, Damián</name>
    </author>
    <author>
      <name>Fernández Escribano, Gerardo</name>
    </author>
    <author>
      <name>Van Wallendael, Glenn</name>
    </author>
    <id>https://hdl.handle.net/11000/36861</id>
    <updated>2025-07-15T01:04:23Z</updated>
    <published>2025-07-14T11:31:16Z</published>
    <summary type="text">Title: A Hybrid Contrast and Texture Masking Model to Boost High Efficiency Video Coding Perceptual Rate-Distortion Performance
Author: Ruiz Atencia, Javier; López-Granado, Otoniel; Pérez Malumbres, Manuel; Martínez-Rach, Miguel; Ruiz Coll, Damián; Fernández Escribano, Gerardo; Van Wallendael, Glenn
Published in: Electronics 2024, 13(16), 3341; https://doi.org/10.3390/electronics13163341
Abstract: As most videos are destined for human perception, many techniques have been designed to improve video coding based on how the human visual system perceives video quality. In this paper, we propose the use of two perceptual coding techniques, namely contrast masking and texture masking, jointly operating under the High Efficiency Video Coding (HEVC) standard. These techniques aim to improve the subjective quality of the reconstructed video at the same bit rate. For contrast masking, we propose the use of a dedicated weighting matrix for each block size (from 4×4 up to 32×32), unlike the HEVC standard, which only defines an 8×8 weighting matrix that is upscaled to build the 16×16 and 32×32 weighting matrices (a 4×4 weighting matrix is not supported). Our approach achieves average Bjøntegaard Delta-Rate (BD-rate) gains of between 2.5% and 4.48%, depending on the perceptual metric and coding mode used. On the other hand, we propose a novel texture masking scheme based on the classification of each coding unit to apply an over-quantization depending on the coding unit’s texture level. For each coding unit, its mean directional variance features are computed to feed a support vector machine model that predicts the texture type (plane, edge, or texture). According to this classification, the block’s energy, the type of coding unit, and its size, an over-quantization value is computed as a QP offset (DQP) to be applied to the coding unit. By applying both techniques in the HEVC reference software, an overall average BD-rate gain of 5.79% is achieved, proving their complementarity.</summary>
    <dc:date>2025-07-14T11:31:16Z</dc:date>
  </entry>
</feed>

