<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>DSpace Community:</title>
    <link>https://hdl.handle.net/11000/30411</link>
    <description />
    <pubDate>Fri, 03 Apr 2026 20:29:53 GMT</pubDate>
    <dc:date>2026-04-03T20:29:53Z</dc:date>
    <item>
      <title>Saliency Dataset and Predictive Model for Areas of Interest in VVC Perceptual Coding</title>
      <link>https://hdl.handle.net/11000/39591</link>
      <description>Title: Saliency Dataset and Predictive Model for Areas of Interest in VVC Perceptual Coding
Authors: Kessler Martín, Jorge; Fernández Lagos, Pablo; García Lucas, David; Cebrián Márquez, Gabriel; Ríos, Belén; Vigueras, Guillermo; Díaz Honrubia, Antonio Jesús
Abstract: Video coding standardization organizations have invested significant efforts in achieving greater compression factors over the years. Approved in 2020, the Versatile Video Coding (VVC) standard reduces the bit rate needed to encode a sequence by half compared to its predecessor. However, users today have increasingly demanding requirements, leading to a significant rise in video traffic on the Internet. In this context, perceptual video coding aims to reduce video bit rate by decreasing the objective quality while maintaining the subjective quality. This work presents a novel dataset designed for training models to predict video saliency, i.e., the areas in a video to which viewers are more likely to pay attention. The dataset is publicly available. Furthermore, this work also proposes a machine learning model that classifies each Coding Tree Unit (CTU) as salient or not, and adjusts its quality accordingly. The results show that this model has an accuracy of 95% and correctly identifies 98% of the CTUs that are actually salient.</description>
      <pubDate>Thu, 26 Mar 2026 12:00:04 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39591</guid>
      <dc:date>2026-03-26T12:00:04Z</dc:date>
    </item>
    <item>
      <title>Comparing V-Nova LCEVC SDK with Practical Open-Source Video Codecs</title>
      <link>https://hdl.handle.net/11000/39590</link>
      <description>Title: Comparing V-Nova LCEVC SDK with Practical Open-Source Video Codecs
Authors: Valera, María; Rodríguez Sánchez, Rafael; Cuenca, Pedro; Cebrián Márquez, Gabriel; Díaz Honrubia, Antonio Jesús; García Lucas, David
Abstract: This paper presents a comparative evaluation of the V-Nova LCEVC SDK against several practical open-source video encoders, namely SVT-AV1, XEVE, VVenC, x265, and x264. We analyze the trade-offs between compression efficiency and encoder/decoder runtime for high-resolution (UHD and HD) 10-bit consumer applications under a random access configuration. Rate–distortion behavior is assessed using Video Multimethod Assessment Fusion (VMAF and VMAF-NEG) and Peak Signal-to-Noise Ratio (PSNR), while computational cost is measured through encoder/decoder runtime. We also analyze the impact of LCEVC’s enhancement layer in terms of both bitrate increase and rate–distortion improvement. The results show that the V-Nova LCEVC SDK delivers notable reductions in encoding time with respect to its base codecs, highlighting its suitability as a low-complexity enhancement layer. By comparison, VVenC exhibits strong compression performance at the expense of high complexity, XEVE also displays considerable encoding times, and SVT-AV1 offers a more balanced compromise between efficiency and computational requirements.</description>
      <pubDate>Thu, 26 Mar 2026 11:59:06 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39590</guid>
      <dc:date>2026-03-26T11:59:06Z</dc:date>
    </item>
    <item>
      <title>A fast full partitioning algorithm for HEVC-to-VVC video transcoding using Bayesian classifiers</title>
      <link>https://hdl.handle.net/11000/39589</link>
      <description>Title: A fast full partitioning algorithm for HEVC-to-VVC video transcoding using Bayesian classifiers
Authors: García Lucas, David; Cebrián Márquez, Gabriel; Díaz Honrubia, Antonio Jesús; Mallikarachchi, Thanuja; Cuenca Castillo, Pedro Ángel
Abstract: The Versatile Video Coding (VVC) standard was released in 2020 to replace the High Efficiency Video Coding (HEVC) standard, making it necessary to convert HEVC-encoded content to VVC to exploit its compression performance, which is achieved by using a larger block size of 128 × 128 pixels, among other new coding tools. However, 80.93% of the encoding time is spent on finding a suitable block partitioning. To reduce this time, this proposal presents an HEVC-to-VVC transcoding algorithm focused on accelerating the CTU partitioning decisions. The transcoder extracts different kinds of information from the input HEVC bitstream and feeds them to two Bayes-based models. Experimental results show a time saving of 45.40% in the transcoding process compared with the traditional cascade transcoder. This time gain was obtained on average over all test sequences in the Random Access scenario, at the expense of only a 1.50% BD-rate increase.</description>
      <pubDate>Thu, 26 Mar 2026 11:58:15 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39589</guid>
      <dc:date>2026-03-26T11:58:15Z</dc:date>
    </item>
    <item>
      <title>Adaptive quadtree splitting parallelization (AQSP) algorithm for the VVC standard</title>
      <link>https://hdl.handle.net/11000/39588</link>
      <description>Title: Adaptive quadtree splitting parallelization (AQSP) algorithm for the VVC standard
Authors: González Ruíz, Alberto; Díaz Honrubia, Antonio Jesús; Tapia Fernández, Santiago; García Lucas, David; Cebrián Márquez, Gabriel; Mengual Galán, Luis
Abstract: The Versatile Video Coding (VVC) standard, also known as H.266, was released in 2020 as the natural successor to the High Efficiency Video Coding (HEVC) standard. Among its innovative coding tools, VVC extended the concept of quadtree (QT) splitting to the multi-type tree (MTT) structure, introducing binary and ternary partitions to enhance HEVC’s coding efficiency. While this brought significant compression improvements, it also resulted in a substantial increase in encoding time, primarily due to VVC’s larger Coding Tree Unit (CTU) size of 128 × 128 pixels. To mitigate this, this work introduces a flexible parallel approach for the QT traversal and splitting scheme of the VVC encoder, called the adaptive quadtree splitting parallelization (AQSP) algorithm. This approach distributes coding units (CUs) among different threads using the current depth level of the QT as a basis to minimize the number of broken dependencies. In this way, the algorithm achieves a good trade-off between time savings and coding efficiency. Experimental results show that, compared with the original VVC encoder, AQSP achieves an acceleration factor of 2.04x with 4 threads at the expense of a low impact in terms of BD-rate. These outcomes position AQSP competitively in comparison with other state-of-the-art approaches.</description>
      <pubDate>Thu, 26 Mar 2026 11:57:22 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/39588</guid>
      <dc:date>2026-03-26T11:57:22Z</dc:date>
    </item>
    <item>
      <title>High-Quality Video Streaming Over Urban Vehicular Networks</title>
      <link>https://hdl.handle.net/11000/38132</link>
      <description>Title: High-Quality Video Streaming Over Urban Vehicular Networks
Authors: Piñol Peral, Pablo; Garrido Abenza, Pedro Pablo; Perez Malumbres, Manuel; López-Granado, Otoniel
Abstract: Video streaming services over vehicular ad-hoc networks (VANETs) are in high demand for numerous applications associated with the connected vehicle (infotainment, driver assistance, accident support, etc.). However, streaming high-quality video through a VANET is not a trivial task, as the wireless channel is highly unreliable and suffers from bandwidth constraints. As a consequence, many packets may be lost, making it very difficult for the receiver to reconstruct a video with the minimum required quality. Our proposed scheme combines several aspects of the overall video streaming architecture by following a cross-layer approach that includes: (a) the content characteristics of the video packet stream, (b) an adaptive forward error correction coding scheme, and (c) the use of QoS services. An adaptive RaptorQ coding scheme is proposed to protect the video packet stream without wasting the available network bandwidth. At the same time, we use the QoS differentiated services of IEEE 802.11p to prioritise critical video packets, in order to avoid degradation of video quality during streaming. Finally, we provide a mechanism to reduce the impact of synchronisation effects on the IEEE 1609.4 multiplexed service channel, which reduces packet collisions at the beginning of the service channel slot. All of these techniques, when properly combined, enable high-quality video streaming services in urban VANET scenarios, thus providing a pleasant video quality experience to users even under different network conditions, with moderate to high packet error rates. To test the performance of our proposal, we use a highly detailed simulation framework under different network conditions. The results of this work are expected to provide a feasible solution for high-quality video streaming services in urban VANETs.</description>
      <pubDate>Wed, 12 Nov 2025 08:45:07 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/38132</guid>
      <dc:date>2025-11-12T08:45:07Z</dc:date>
    </item>
    <item>
      <title>Perceptual QP optimization for VVC with dual hybrid neural networks</title>
      <link>https://hdl.handle.net/11000/36863</link>
      <description>Title: Perceptual QP optimization for VVC with dual hybrid neural networks
Authors: Ruiz Atencia, Javier; Lopez Granado, Otoniel; Pérez Malumbres, Manuel; Martínez-Rach, Miguel
Abstract: This paper introduces a dual hybrid neural network model combining convolutional neural networks (CNNs) and artificial neural networks (ANNs) to optimize the quantization parameter (QP) for both 64 × 64 and 32 × 32 blocks in the Versatile Video Coding (VVC) standard, enhancing video quality and compression efficiency. The model employs CNNs for spatial feature extraction and ANNs for structured data handling, addressing the limitations of current heuristic and just noticeable distortion (JND)-based methods. A dataset of luminance channel image blocks, encoded with various QP values, is generated and preprocessed, and the dual hybrid network structure is designed with convolutional and dense layers. The QP optimization is applied at two levels: the 64 × 64 model provides a global QP offset, while the 32 × 32 model refines the QP for further partitioned blocks. Performance evaluations using model error metrics such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE), as well as perceptual metrics such as weighted PSNR (WPSNR), MS-SSIM, PSNR-HVS-M, and VMAF, demonstrate the model’s effectiveness. While our approach performs competitively with state-of-the-art algorithms, it significantly outperforms them in VMAF, the most advanced and widely adopted perceptual quality metric. Furthermore, the dual-model approach yields better results at lower resolutions, whereas the single-model approach is more effective at higher resolutions. These results highlight the adaptability of the proposed models, offering improvements in both compression efficiency and perceptual quality, making them highly suitable for practical applications in modern video coding.</description>
      <pubDate>Mon, 14 Jul 2025 12:04:57 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/36863</guid>
      <dc:date>2025-07-14T12:04:57Z</dc:date>
    </item>
    <item>
      <title>Analysis of the Perceptual Quality Performance of Different HEVC Coding Tools</title>
      <link>https://hdl.handle.net/11000/36862</link>
      <description>Title: Analysis of the Perceptual Quality Performance of Different HEVC Coding Tools
Authors: Ruiz Atencia, Javier; Lopez Granado, Otoniel; Pérez Malumbres, Manuel; Martínez-Rach, Miguel; Van Wallendael, Glenn
Abstract: Each new video coding standard includes encoding techniques that aim to improve the performance and quality of the previous standards. During the development of these techniques, PSNR was used as the main distortion metric. However, the PSNR metric does not consider the subjectivity of the human visual system, so the performance of some coding tools is questionable from the perceptual point of view. To further explore this point, we have developed a detailed study of the perceptual sensitivity of different HEVC video coding tools. To perform this study, we used several popular objective quality assessment metrics to measure the perceptual response of every single coding tool. The conclusions of this work will help to determine the set of HEVC coding tools that provides, in general, the best perceptual response.</description>
      <pubDate>Mon, 14 Jul 2025 12:04:43 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/36862</guid>
      <dc:date>2025-07-14T12:04:43Z</dc:date>
    </item>
    <item>
      <title>A Hybrid Contrast and Texture Masking Model to Boost High Efficiency Video Coding Perceptual Rate-Distortion Performance</title>
      <link>https://hdl.handle.net/11000/36861</link>
      <description>Title: A Hybrid Contrast and Texture Masking Model to Boost High Efficiency Video Coding Perceptual Rate-Distortion Performance
Authors: Ruiz Atencia, Javier; López-Granado, Otoniel; Pérez Malumbres, Manuel; Martínez-Rach, Miguel; Ruiz Coll, Damián; Fernández Escribano, Gerardo; Van Wallendael, Glenn
Published in: Electronics 2024, 13(16), 3341; https://doi.org/10.3390/electronics13163341. Submission received: 27 May 2024 / Revised: 6 August 2024 / Accepted: 17 August 2024 / Published: 22 August 2024.
Abstract: As most videos are destined for human perception, many techniques have been designed to improve video coding based on how the human visual system perceives video quality. In this paper, we propose the use of two perceptual coding techniques, namely contrast masking and texture masking, jointly operating under the High Efficiency Video Coding (HEVC) standard. These techniques aim to improve the subjective quality of the reconstructed video at the same bit rate. For contrast masking, we propose the use of a dedicated weighting matrix for each block size (from 4×4 up to 32×32), unlike the HEVC standard, which only defines an 8×8 weighting matrix that is upscaled to build the 16×16 and 32×32 weighting matrices (a 4×4 weighting matrix is not supported). Our approach achieves average Bjøntegaard Delta-Rate (BD-rate) gains of between 2.5% and 4.48%, depending on the perceptual metric and coding mode used. On the other hand, we propose a novel texture masking scheme based on the classification of each coding unit to apply an over-quantization depending on the coding unit’s texture level. Thus, for each coding unit, its mean directional variance features are computed to feed a support vector machine model that predicts the texture type (plane, edge, or texture). According to this classification, the block’s energy, the type of coding unit, and its size, an over-quantization value is computed as a QP offset (DQP) to be applied to this coding unit. By applying both techniques in the HEVC reference software, an overall average BD-rate gain of 5.79% is achieved, proving their complementarity.</description>
      <pubDate>Mon, 14 Jul 2025 11:31:16 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/36861</guid>
      <dc:date>2025-07-14T11:31:16Z</dc:date>
    </item>
    <item>
      <title>Performance, limitations, and design issues of the integration of a hardware-based IME module with HEVC video encoder software</title>
      <link>https://hdl.handle.net/11000/36860</link>
      <description>Title: Performance, limitations, and design issues of the integration of a hardware-based IME module with HEVC video encoder software
Authors: López Granado, Otoniel; Migallón, Héctor; Alcocer, Estefanía; Gutiérrez, Roberto; Van Wallendael, Glenn; Malumbres, Manuel
Abstract: High Efficiency Video Coding (HEVC) was designed to improve on its predecessor, the H.264/AVC standard, by doubling its compression efficiency. As in previous standards, motion estimation is critical for encoders to achieve significant compression gains. However, the cost of accurately removing temporal redundancy in video is prohibitive, especially when encoding very high resolution video sequences. To reduce the overall video encoding time, we have proposed the implementation of an HEVC motion estimation block in hardware, which can achieve significant speed-ups. However, when the IP hardware is integrated into a software platform, several constraints and limitations reduce its impact on the overall encoding time. In this paper, we analyse these issues in detail to identify the main bottlenecks of the overall software/hardware encoding system. From this analysis, we propose a final integration of the hardware motion estimation module combined with the slice-based parallel version of the HEVC encoding software. The resulting integrated version achieves the best performance in terms of global speed-up, up to 149.63x compared to the sequential version of the HEVC encoder using the full search motion estimation algorithm.</description>
      <pubDate>Mon, 14 Jul 2025 11:30:29 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/36860</guid>
      <dc:date>2025-07-14T11:30:29Z</dc:date>
    </item>
    <item>
      <title>Modeling an Edge Computing Arithmetic Framework for IoT Environments</title>
      <link>https://hdl.handle.net/11000/35621</link>
      <description>Title: Modeling an Edge Computing Arithmetic Framework for IoT Environments
Authors: Roig, Pedro Juan; Alcaraz, Salvador; Gilly, Katja; Bernad, Cristina; Juiz, Carlos
Abstract: IoT environments are forecast to grow exponentially in the coming years thanks to recent advances in both edge computing and artificial intelligence. In this paper, a model of a remote computing scheme is presented, where three layers of computing nodes are put in place in order to optimize the computing and forwarding tasks. In this sense, a generic layout has been designed so as to easily achieve communications among the diverse layers by means of simple arithmetic operations, which may save resources in all nodes involved. Traffic forwarding is undertaken by means of forwarding tables within network devices, which need to be searched in order to find the proper destination, and that process may become resource-consuming as the number of entries in such tables grows. However, the proposed arithmetic framework may speed up traffic forwarding decisions, as it relies on integer division and modular arithmetic, which may be more straightforward. Furthermore, two diverse approaches have been proposed to formally describe the design: coding with Spin/Promela, or an algebraic approach with the Algebra of Communicating Processes (ACP), resulting in a state explosion for the former and a specified and verified model for the latter.</description>
      <pubDate>Wed, 12 Feb 2025 08:37:24 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/11000/35621</guid>
      <dc:date>2025-02-12T08:37:24Z</dc:date>
    </item>
  </channel>
</rss>

