The Random Walk Blog

2024-12-05

Core Web Vitals: How to Improve LCP and CLS for Optimal Site Performance

Optimizing a website for performance is essential to enhance user experience and boost search engine rankings. Two critical metrics from Google’s Core Web Vitals (CWV)—Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS)—play a significant role in measuring and improving a site’s performance. This post outlines key strategies for optimizing these metrics and highlights their observed impact on both mobile and desktop performance.

Understanding Largest Contentful Paint (LCP)

LCP, a key metric in Core Web Vitals, measures the time it takes for the largest visible content element (such as an image, text block, or video) in the viewport to load and become visible. It serves as a crucial indicator of how quickly users perceive a page to be loaded. The ideal target for LCP is less than 2.5 seconds.

Understanding Cumulative Layout Shift (CLS)

CLS, another key metric in Core Web Vitals, tracks unexpected layout shifts during a page's lifecycle. These shifts can cause frustration, especially when users are interacting with a page, such as clicking a button or reading text. The ideal target for CLS is less than 0.1.

Core Web Vitals: Strategies to Optimize LCP

Image Optimization: Compressing images and converting them to WebP format ensures the largest visible elements load quickly. This reduces the time needed for rendering the key visual components.
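
As an illustration (file names are hypothetical), a picture element can serve the compressed WebP version while keeping a JPEG fallback for older browsers:

  <picture>
    <!-- Compressed WebP for browsers that support it -->
    <source srcset="hero.webp" type="image/webp">
    <!-- JPEG fallback; width/height also reserve layout space -->
    <img src="hero.jpg" alt="Hero banner" width="1200" height="600">
  </picture>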

Implementing Lazy Loading: Applying the loading="lazy" attribute to non-critical images defers their loading until they appear in the viewport, prioritizing above-the-fold content.
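
For example (paths are placeholders), below-the-fold images can be deferred while the likely LCP image is fetched eagerly and at high priority:

  <!-- The LCP hero image should NOT be lazy-loaded -->
  <img src="hero.jpg" alt="Hero banner" width="1200" height="600" fetchpriority="high">

  <!-- Non-critical images further down the page load only when near the viewport -->
  <img src="gallery-1.jpg" alt="Gallery item" width="600" height="400" loading="lazy">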

Leveraging a Content Delivery Network (CDN): Using a CDN to deliver assets from servers closer to users reduces latency, improving performance across regions, particularly for mobile devices.

Minimizing Render-Blocking Resources: By minifying JavaScript and CSS and deferring non-essential resources, browsers can focus on loading critical content faster.
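
A common pattern (script names are hypothetical) is to mark non-critical JavaScript with defer or async so it no longer blocks HTML parsing:

  <!-- Deferred scripts download in parallel and run after the document is parsed -->
  <script src="analytics.js" defer></script>
  <!-- async suits fully independent scripts that can run in any order -->
  <script src="ads.js" async></script>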

Desktop-Specific Optimization:

  • Preload key desktop fonts and images to prioritize their rendering (a sketch follows this list).

  • Enable HTTP/2 to improve desktop asset loading efficiency.

  • Optimize the layout structure for larger screens to avoid overloading above-the-fold content with unnecessary elements.
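
A minimal sketch of the preload hints mentioned above, assuming a self-hosted WOFF2 font and a known hero image (both paths are hypothetical); note that enabling HTTP/2 itself is server configuration, not markup:

  <head>
    <!-- Fetch the font early; crossorigin is required for font preloads -->
    <link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>
    <!-- Fetch the likely LCP image before the parser discovers it -->
    <link rel="preload" href="/img/hero-desktop.webp" as="image">
  </head>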

Utilize Critical Path Rendering:

  • For desktop layouts, prioritize loading above-the-fold content by identifying critical CSS and inlining it directly in the page's <head> (see the sketch after this list).

  • Use server-side rendering (SSR) to pre-render content that takes longer to load, reducing the time to the first meaningful paint.
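
A sketch of inlined critical CSS, assuming the above-the-fold styles have already been extracted (selectors and file names are illustrative); the full stylesheet is loaded without blocking first paint via the widely used media-swap pattern:

  <head>
    <!-- Inline only the styles needed for above-the-fold content -->
    <style>
      .hero { min-height: 60vh; font-family: system-ui, sans-serif; }
    </style>
    <!-- Load the full stylesheet without blocking rendering -->
    <link rel="stylesheet" href="/css/main.css" media="print" onload="this.media='all'">
  </head>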

Desktop-Specific Image Scaling:

  • Serve larger, high-resolution images tailored for desktop screens, with responsive breakpoints so smaller viewports are not forced to download oversized files (a combined sketch follows this list).

  • Implement next-gen formats like AVIF for desktop images where quality is a priority.
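
One way to combine both points (breakpoints and file names are illustrative) is a picture element that offers AVIF first and uses srcset/sizes so desktop viewports receive the high-resolution variant:

  <picture>
    <!-- AVIF for browsers that support it -->
    <source type="image/avif"
            srcset="product-800.avif 800w, product-1600.avif 1600w"
            sizes="(min-width: 1024px) 50vw, 100vw">
    <!-- Fallback with the same breakpoints -->
    <img src="product-800.jpg"
         srcset="product-800.jpg 800w, product-1600.jpg 1600w"
         sizes="(min-width: 1024px) 50vw, 100vw"
         width="800" height="600" alt="Product photo">
  </picture>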

Browser Caching for Static Assets: Leverage long-term caching strategies for static desktop assets (e.g., CSS, JavaScript) to ensure repeat visits load faster.

Core Web Vitals: Strategies to Optimize CLS

Setting Dimensions for Media Elements: Adding explicit width and height attributes to images and videos reserves the necessary space in the layout before the content loads.
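
For example, explicit dimensions let the browser compute the aspect ratio and reserve the image's box before the file arrives:

  <!-- A 16:9 box is reserved immediately, so surrounding content does not jump -->
  <img src="chart.png" alt="Traffic chart" width="1280" height="720">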

Reserving Ad Slots: Implementing dedicated placeholders for ads ensures that their late loading does not disrupt the layout.
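
A minimal sketch, assuming a standard 728x90 leaderboard ad (the class name is hypothetical):

  <style>
    /* Reserve the slot's full height even before the ad network responds */
    .ad-slot { width: 728px; min-height: 90px; }
  </style>
  <div class="ad-slot"><!-- ad script injects the creative here --></div>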

Font Loading Optimization: Applying the font-display: swap property to web fonts keeps text visible while fonts load; pairing it with a metrically similar fallback font minimizes the layout shift when the web font swaps in.
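
For a self-hosted web font (font name and path are placeholders), swap shows fallback text immediately and replaces it once the custom font is ready:

  <style>
    @font-face {
      font-family: "BrandFont";
      src: url("/fonts/brandfont.woff2") format("woff2");
      /* Show fallback text immediately; swap in the web font once loaded */
      font-display: swap;
    }
    body { font-family: "BrandFont", Arial, sans-serif; }
  </style>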

Refining Animations and Transitions: Avoiding transition effects that trigger layout recalculations ensures smoother interactions without unexpected shifts.
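
Animating transform and opacity instead of layout-affecting properties such as top, left, or height keeps the work on the compositor (selectors are illustrative):

  <style>
    /* Prefer transform/opacity: they animate without triggering layout */
    .panel { transition: transform 0.3s ease, opacity 0.3s ease; }
    .panel.closed { transform: translateY(1rem); opacity: 0; }
    .panel.open { transform: translateY(0); opacity: 1; }
  </style>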

Desktop-Specific Refinements:

  • Reserve space for high-resolution assets commonly used on larger screens.

  • Use adaptive image scaling to maintain layout stability when switching between resolutions.

Dedicated Space for Widgets: Allocate fixed dimensions for interactive desktop widgets, such as chat boxes or product filters, ensuring their dynamic behavior does not cause layout shifts.

Custom Desktop Layouts: Design desktop layouts with grid systems to ensure predictable alignment and spacing, even when content dynamically updates.
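
A sketch of a grid-based desktop layout whose track sizes stay fixed as content re-renders (class names are illustrative):

  <style>
    /* Fixed track definitions keep alignment stable when cards update */
    .catalog {
      display: grid;
      grid-template-columns: repeat(4, minmax(0, 1fr));
      gap: 1rem;
    }
  </style>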

Optimize Pagination and Infinite Scrolling: For desktops, ensure that pagination or infinite scrolling does not shift content unexpectedly. Use placeholders for newly loaded content to maintain layout stability.
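
One approach (class names are hypothetical) is to append fixed-size skeleton placeholders first and swap in the fetched items, so existing content never moves:

  <style>
    /* Skeletons occupy the same box as the real cards they will become */
    .card, .card-skeleton { height: 320px; }
    .card-skeleton { background: #eee; }
  </style>
  <div class="card-skeleton" aria-hidden="true"></div>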

Performance Improvements of Core Web Vitals

Tests conducted using tools like Google PageSpeed Insights and Web.dev revealed significant performance improvements in the Core Web Vitals after adopting these strategies.

Testing Methodology:

We evaluated a sample website with a focus on both mobile and desktop experiences. The website's key visual elements include high-quality images, interactive buttons, and dynamic ad content.

Ideal Targets of Core Web Vitals:

For LCP: Below 2.5 seconds.

For CLS: Below 0.1.

Mobile Results:

Before Optimization:

LCP: 4.2 seconds (Exceeds the ideal target).

CLS: 0.2 (Above acceptable levels).

[Image: Mobile Core Web Vitals report before optimization (LCP and CLS failing)]

After Optimization:

LCP improved to 2.3 seconds.

CLS reduced to 0.05, well within the ideal range.

[Image: Mobile Core Web Vitals report after optimization (LCP and CLS passing)]

Desktop Results:

Before Optimization:

LCP: 3.1 seconds (Room for improvement).

CLS: 0.12 (Slightly above the 0.1 target).

[Image: Desktop Core Web Vitals report before optimization (LCP and CLS failing)]

After Optimization:

LCP improved to 1.8 seconds, well under the ideal target.

CLS reduced to 0.01, eliminating noticeable layout shifts.

[Image: Desktop Core Web Vitals report after optimization (LCP and CLS passing)]

The improvements on desktop are particularly noteworthy due to enhanced CDN efficiency, strategic asset preloading, and layout adjustments tailored for larger viewports.

Recommendations for Continuous Optimization of Core Web Vitals

Prioritize Key Visual Elements: Optimize the largest visible content to meet LCP targets consistently.

Stabilize Layouts: Reserve space for all dynamic elements, such as ads and images, to minimize CLS issues.

Regular Monitoring and Testing: Use tools like Lighthouse and Web.dev frequently to track and refine Core Web Vitals’ performance metrics. Test various scenarios (e.g., heavy traffic, slow connections) to ensure stability.

Tailored Desktop Monitoring:

  • Perform separate testing for desktop and mobile platforms to address unique performance bottlenecks, such as larger screen resolutions and interactive elements.

  • Use desktop-specific test profiles in tools like Lighthouse to simulate slower network conditions, large viewport dimensions, and high-DPI screens.

Focus on Desktop-Only Features:

  • Optimize Core Web Vitals’ performance for desktop-exclusive features such as multi-column layouts, hover effects, and large-scale carousels.

  • Ensure these features are lightweight and do not hinder overall performance.

Integrate Real User Metrics: Utilize Real User Monitoring (RUM) tools to collect desktop-specific data on user behavior and identify issues impacting LCP and CLS.
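
As one concrete option, Google's open-source web-vitals library reports field LCP and CLS values from real sessions; a minimal sketch (the /analytics endpoint is a placeholder):

  <script type="module">
    // onLCP/onCLS invoke the callback with the final field value for each metric
    import { onLCP, onCLS } from 'https://unpkg.com/web-vitals@4?module';

    function sendToAnalytics(metric) {
      // Replace /analytics with your own collection endpoint
      navigator.sendBeacon('/analytics', JSON.stringify(metric));
    }

    onLCP(sendToAnalytics);
    onCLS(sendToAnalytics);
  </script>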

Periodic Benchmarking: Compare your site's desktop performance against competitors and industry benchmarks to identify opportunities for further optimization.

By implementing these strategies, websites can deliver faster, more stable, and user-friendly experiences across all devices, significantly enhancing usability and search engine visibility.

Related Blogs

Refining and Creating Data Visualizations with LIDA

Microsoft’s Language-Integrated Data Analysis (LIDA) is a game-changer, offering an advanced framework to refine and enhance data visualizations with seamless integration, automation, and intelligence. Let’s explore the key features and applications of LIDA, and its transformative impact on the data visualization landscape. LIDA is a powerful library designed to effortlessly generate data visualizations and create data-driven infographics with precision. What makes LIDA stand out is its grammar-agnostic approach, enabling compatibility with various programming languages and visualization libraries, including popular ones like matplotlib, seaborn, altair, and d3. Plus, it seamlessly integrates with multiple large language model providers such as OpenAI, Azure OpenAI, PaLM, Cohere, and Huggingface.

From Frontend-Heavy to a Balanced Architecture: Enhancing System Efficiency

Building efficient and scalable applications often requires balancing responsibilities between the frontend and backend. When tasks like report generation are managed solely on the frontend, it can lead to performance bottlenecks, scalability issues, and user experience challenges. Transitioning to a balanced architecture can address these limitations while improving overall system efficiency.

From Blinking LEDs to Real-Time AI: The Raspberry Pi’s Role in Innovation

The Raspberry Pi, launched in 2012, has entered the vocabulary of all doers and makers of the world. It was designed as an affordable, accessible microcomputer for students and hobbyists. Over the years, Raspberry Pi has evolved from a modest credit card-sized computer into a versatile platform that powers innovations in fields as diverse as home economics to IoT, AI, robotics and industrial automation. Raspberry Pis are single board computers that can be found in an assortment of variations with models ranging from anywhere between $4 to $70. Here, we’ll trace the journey of the Raspberry Pi’s evolution and explore some of the innovations that it has spurred with examples and code snippets.

Exploring Different Text-to-Speech (TTS) Models: From Robotic to Natural Voices

Text-to-speech (TTS) technology has evolved significantly in the past few years, enabling one to convert simple text to spoken words with remarkable accuracy and naturalness. From simple robotic voices to sophisticated, human-like speech synthesis, models offer specialized capabilities applicable to different use cases. In this blog, we will explore how different TTS models generate speech from text as well as compare their capabilities, models explored include MARS-5, Parler-TTS, Tortoise-TTS, MetaVoice-1B, Coqui TTS among others. The TTS process generally involves several key steps discussed later in detail: input text and reference audio, text processing, voice synthesis and then the final audio is outputted. Some models enhance this process by supporting few-shot or zero-shot learning, where a new voice can be generated based on minimal reference audio. Let's delve into how some of the leading TTS models perform these tasks.

A Beginner’s Guide to Automated Testing

A cursory prompt to chatGPT asking for guidance into the world of automated testing, spits out the words Selenium and Taiko. This blog post will explore our hands-on experience with these tools and share insights into how they performed in real-world testing scenarios. But first what is automated testing? Automated testing refers to the process of using specialized tools to run predefined tests on software applications automatically. It differs from manual testing, where human testers interact with the software to validate functionality and identify bugs. The key USPs of automated testing are efficiency in terms of multiple repeat runs of test cases, integration with CI/CD pipelines like Github actions and reliability.

