<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Murshid Muzamil</title>
    <description>Writings and notes on software, business, technology, and building things on the web.</description>
    <link>https://murshidm.github.io/</link>
    <atom:link href="https://murshidm.github.io/feed.xml" rel="self" type="application/rss+xml" />
    
      <item>
        <title>Enhancing an Agentic Wiki with Graphify for Semantic Reasoning</title>
        <description>&lt;p&gt;I have been experimenting with a Karpathy style LLM maintained wiki for small business knowledge management. The system is structured as a Markdown based knowledge base where agents maintain concepts, entities, summaries, and syntheses across a strict directory taxonomy.&lt;/p&gt;

&lt;p&gt;This approach has been effective for organizing knowledge into a clean, navigable structure that agents can reliably use for summarization and retrieval.&lt;/p&gt;

&lt;p&gt;However, while the wiki provides strong &lt;em&gt;explicit structure&lt;/em&gt;, it is limited by its reliance on manually defined links and human-imposed categorization.&lt;/p&gt;

&lt;p&gt;I am now exploring Graphify as a complementary semantic layer to enhance relationship discovery, retrieval quality, and multi-hop reasoning.&lt;/p&gt;

&lt;h2 id=&quot;the-wiki-approach&quot;&gt;The Wiki Approach&lt;/h2&gt;

&lt;p&gt;The current system is inspired by the LLM wiki pattern used in agent-based knowledge systems.&lt;/p&gt;

&lt;h3 id=&quot;core-structure&quot;&gt;Core Structure&lt;/h3&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;concepts/&lt;/code&gt; → business ideas and frameworks&lt;br /&gt;
  &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;entities/&lt;/code&gt; → organizations, legal structures, tools&lt;br /&gt;
  &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;summaries/&lt;/code&gt; → distilled knowledge from raw sources&lt;br /&gt;
  &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;syntheses/&lt;/code&gt; → comparative reasoning and decision frameworks&lt;br /&gt;
  &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;journal/&lt;/code&gt; → observations and experiments&lt;/p&gt;

&lt;p&gt;Each page includes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;YAML frontmatter metadata&lt;/li&gt;
  &lt;li&gt;strict naming conventions&lt;/li&gt;
  &lt;li&gt;explicit wiki-style linking&lt;/li&gt;
  &lt;li&gt;agent-maintained updates&lt;/li&gt;
&lt;/ul&gt;
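
&lt;p&gt;A minimal page under these conventions might look like the following sketch. The file layout, field names, and tags are illustrative, not the exact schema used by the system:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;---
title: undercapitalization
type: concept
tags: [financing, risk]
updated: 2026-05-11
---

Undercapitalization occurs when a business lacks the cash reserves
to survive early revenue gaps.

Related: [[small business financing]], [[startup failure patterns]]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;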

&lt;h3 id=&quot;strengths&quot;&gt;Strengths&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Highly interpretable Markdown structure&lt;/li&gt;
  &lt;li&gt;Deterministic navigation via explicit links&lt;/li&gt;
  &lt;li&gt;Stable, curated knowledge representation&lt;/li&gt;
  &lt;li&gt;Strong alignment with LLM summarization workflows&lt;/li&gt;
  &lt;li&gt;Easy to audit and maintain&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;limitations&quot;&gt;Limitations&lt;/h3&gt;

&lt;p&gt;Despite its structure, the system has inherent constraints:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Relationships must be explicitly written&lt;/li&gt;
  &lt;li&gt;Cross-domain connections are often missed&lt;/li&gt;
  &lt;li&gt;Multi-hop reasoning is limited&lt;/li&gt;
  &lt;li&gt;Semantic similarity is not computed&lt;/li&gt;
  &lt;li&gt;Knowledge remains partially fragmented across folders&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short: the wiki is a &lt;strong&gt;structured knowledge repository&lt;/strong&gt;, not a fully connected semantic system.&lt;/p&gt;

&lt;h2 id=&quot;introducing-graphify&quot;&gt;Introducing Graphify&lt;/h2&gt;

&lt;p&gt;Graphify introduces a different paradigm: a &lt;strong&gt;semantic knowledge graph extracted from documents and relationships&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/safishamsi/graphify&quot; target=&quot;_blank&quot;&gt;Graphify&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of organizing knowledge into pages, it builds:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;nodes (concepts, entities, documents)&lt;/li&gt;
  &lt;li&gt;edges (relationships between them)&lt;/li&gt;
  &lt;li&gt;inferred semantic links&lt;/li&gt;
  &lt;li&gt;confidence scored associations&lt;/li&gt;
  &lt;li&gt;clustered communities of related ideas&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Graphify processes information through multiple passes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;structural extraction (documents, code, metadata)&lt;/li&gt;
  &lt;li&gt;semantic inference (LLM-based relationships)&lt;/li&gt;
  &lt;li&gt;media transcription (audio/video)&lt;/li&gt;
  &lt;li&gt;graph merging and clustering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output is a &lt;strong&gt;connected knowledge network&lt;/strong&gt;, not a document tree.&lt;/p&gt;

&lt;h2 id=&quot;what-graphify-adds-to-the-wiki-system&quot;&gt;What Graphify Adds to the Wiki System&lt;/h2&gt;

&lt;p&gt;Graphify does not replace the wiki. It enhances it by adding a hidden semantic layer.&lt;/p&gt;

&lt;h3 id=&quot;1-discovery-of-implicit-relationships&quot;&gt;1. Discovery of Implicit Relationships&lt;/h3&gt;

&lt;p&gt;The wiki only contains explicit links such as:&lt;/p&gt;

&lt;p&gt;[[startup]]
  [[small business financing]]&lt;/p&gt;

&lt;p&gt;Graphify can infer missing relationships, for example:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;startup → undercapitalization risk&lt;/li&gt;
  &lt;li&gt;franchising → alternative to entrepreneurship&lt;/li&gt;
  &lt;li&gt;financing → constraint on marketing capacity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are often not explicitly encoded but are critical for reasoning.&lt;/p&gt;

&lt;h3 id=&quot;2-cross-domain-connectivity&quot;&gt;2. Cross-Domain Connectivity&lt;/h3&gt;

&lt;p&gt;The wiki separates knowledge into structured domains:&lt;/p&gt;

&lt;p&gt;concepts/
  entities/
  syntheses/&lt;/p&gt;

&lt;p&gt;Graphify ignores folder boundaries and connects across them:&lt;/p&gt;

&lt;p&gt;legal structures ↔ tax implications ↔ financing models&lt;br /&gt;
  marketing ↔ customer acquisition ↔ cash flow&lt;br /&gt;
  operations ↔ scalability ↔ cost structure&lt;/p&gt;

&lt;p&gt;This produces a unified semantic view of the system.&lt;/p&gt;

&lt;h3 id=&quot;3-multi-hop-reasoning-for-agents&quot;&gt;3. Multi-Hop Reasoning for Agents&lt;/h3&gt;

&lt;p&gt;Graphify enables traversal-based reasoning:&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;small business&lt;br /&gt;
→ undercapitalization&lt;br /&gt;
→ financing gaps&lt;br /&gt;
→ startup failure patterns&lt;/p&gt;

&lt;p&gt;This allows agents to perform reasoning chains instead of isolated retrieval.&lt;/p&gt;
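
&lt;p&gt;A traversal of this kind is easy to sketch. The adjacency list below is a hypothetical stand-in for a Graphify export, not its actual output format:&lt;/p&gt;

&lt;div class=&quot;language-javascript highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;// Hypothetical adjacency list standing in for a Graphify export.
const graph = {
  'small business': ['undercapitalization'],
  'undercapitalization': ['financing gaps'],
  'financing gaps': ['startup failure patterns'],
  'startup failure patterns': []
};

// Collect every concept reachable within k hops of a starting node.
function kHopNeighborhood(start, k) {
  const visited = new Set([start]);
  let frontier = [start];
  for (let hop = 0; hop !== k; hop += 1) {
    const next = [];
    for (const node of frontier) {
      for (const neighbor of graph[node] || []) {
        if (!visited.has(neighbor)) {
          visited.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  return visited;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;An agent can then feed the whole neighborhood, rather than a single page, into its context before reasoning.&lt;/p&gt;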

&lt;h3 id=&quot;4-conflict-detection-and-consistency-checking&quot;&gt;4. Conflict Detection and Consistency Checking&lt;/h3&gt;

&lt;p&gt;Graphify can identify contradictions across documents:&lt;/p&gt;

&lt;p&gt;One synthesis may claim that franchising reduces risk,&lt;br /&gt;
  while another may highlight operational rigidity risks.&lt;/p&gt;

&lt;p&gt;These relationships can be flagged as:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;conflicting&lt;/li&gt;
  &lt;li&gt;uncertain&lt;/li&gt;
  &lt;li&gt;context dependent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This improves the reliability of the knowledge base.&lt;/p&gt;
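
&lt;p&gt;One way to picture this is as status labels on graph edges. The field names below are illustrative, not Graphify’s actual schema:&lt;/p&gt;

&lt;div class=&quot;language-javascript highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;// Hypothetical edge records after conflict detection.
const edges = [
  {
    source: 'franchising',
    target: 'business risk',
    relation: 'reduces',
    confidence: 0.62,
    status: 'context dependent'
  },
  {
    source: 'franchising',
    target: 'operational rigidity',
    relation: 'increases',
    confidence: 0.81,
    status: 'supported'
  }
];

// Summarize how much of the graph needs human review.
function countByStatus(list) {
  const counts = {};
  for (const edge of list) {
    counts[edge.status] = (counts[edge.status] || 0) + 1;
  }
  return counts;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;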

&lt;h3 id=&quot;5-semantic-retrieval-enhancement&quot;&gt;5. Semantic Retrieval Enhancement&lt;/h3&gt;

&lt;p&gt;Instead of retrieving isolated Markdown pages, Graphify enables:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;entity-based retrieval&lt;/li&gt;
  &lt;li&gt;neighborhood expansion (k-hop traversal)&lt;/li&gt;
  &lt;li&gt;community-based clustering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This significantly improves retrieval context for agents.&lt;/p&gt;

&lt;h3 id=&quot;6-emergent-structure-discovery&quot;&gt;6. Emergent Structure Discovery&lt;/h3&gt;

&lt;p&gt;Graphify can reveal hidden patterns:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;clusters of marketing related concepts&lt;/li&gt;
  &lt;li&gt;financing dependency networks&lt;/li&gt;
  &lt;li&gt;operational bottlenecks across business types&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helps validate and refine the existing taxonomy.&lt;/p&gt;

&lt;h2 id=&quot;how-the-two-systems-complement-each-other&quot;&gt;How the Two Systems Complement Each Other&lt;/h2&gt;

&lt;p&gt;The wiki and Graphify serve different roles:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Layer&lt;/th&gt;
      &lt;th&gt;Role&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Karpathy-style wiki&lt;/td&gt;
      &lt;td&gt;Structured, curated knowledge representation&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Graphify&lt;/td&gt;
      &lt;td&gt;Semantic relationship and inference layer&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Agents&lt;/td&gt;
      &lt;td&gt;Reasoning + summarization layer&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h3 id=&quot;combined-system-behavior&quot;&gt;Combined System Behavior&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;The wiki provides:
  structured definitions&lt;br /&gt;
  curated explanations&lt;br /&gt;
  explicit taxonomy&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Graphify adds:
  implicit relationships&lt;br /&gt;
  cross-domain links&lt;br /&gt;
  semantic clustering&lt;br /&gt;
  multi-hop connectivity&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Agents use both:
  wiki for grounded explanations&lt;br /&gt;
  graph for deep contextual reasoning&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;why-this-matters-for-business-knowledge&quot;&gt;Why This Matters for Business Knowledge&lt;/h2&gt;

&lt;p&gt;Business systems are inherently relational:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;financing affects marketing&lt;/li&gt;
  &lt;li&gt;legal structure affects taxation&lt;/li&gt;
  &lt;li&gt;customer acquisition affects cash flow&lt;/li&gt;
  &lt;li&gt;operations affect scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A wiki captures these concepts individually.&lt;/p&gt;

&lt;p&gt;A graph captures how they interact.&lt;/p&gt;

&lt;p&gt;For agent based systems, these relationships are often more important than isolated definitions.&lt;/p&gt;
</description>
        <pubDate>Mon, 11 May 2026 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/ai/agents/knowledge%20systems/graphs/2026/05/11/agents-wiki-graphify-semantic-layer/</link>
        <guid isPermaLink="true">https://murshidm.github.io/ai/agents/knowledge%20systems/graphs/2026/05/11/agents-wiki-graphify-semantic-layer/</guid>
      </item>
    
      <item>
        <title>Agents Think in Markdown, Humans Prefer HTML</title>
        <description>&lt;p&gt;The funny thing about the viral HTML vs Markdown post from &lt;a href=&quot;https://x.com/trq212/status/2052809885763747935&quot;&gt;Thariq&lt;/a&gt; (Anthropic) is that it’s basically stating something incredibly obvious, but packaging it in a way that suddenly makes everyone collectively realize it at the same time.&lt;/p&gt;

&lt;p&gt;Of course HTML is better for humans. Of course agents should output visually structured artifacts instead of giant Markdown walls. Of course a browser is a better interface for consuming complex AI-generated information than endlessly scrolling through a 500-line &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.md&lt;/code&gt; file. None of this is actually surprising.&lt;/p&gt;

&lt;p&gt;The core idea underneath the entire article is almost hilariously simple: feed agents Markdown-like structure internally, then render the final output as HTML for humans.&lt;/p&gt;

&lt;p&gt;That’s really it.&lt;/p&gt;

&lt;p&gt;A fancy post to explain a very straightforward idea.&lt;/p&gt;

&lt;p&gt;But the reason it went viral is because it quietly summarizes what agents are becoming good at. Not just answering questions, but organizing information, structuring complexity, and presenting ideas in ways that are actually pleasant to consume.&lt;/p&gt;

&lt;p&gt;And honestly, the examples in the article make the point very well.&lt;/p&gt;

&lt;p&gt;Things like:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;interactive specs with tabs and diagrams&lt;/li&gt;
  &lt;li&gt;code reviews with flowcharts and annotations&lt;/li&gt;
  &lt;li&gt;visual diff explorers&lt;/li&gt;
  &lt;li&gt;mockups and UI prototypes&lt;/li&gt;
  &lt;li&gt;reports synthesized from Slack, Git history, and documents&lt;/li&gt;
  &lt;li&gt;temporary interfaces for editing prompts or configs&lt;/li&gt;
  &lt;li&gt;export buttons and interactive controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these are fundamentally new capabilities. The underlying intelligence is often the same model generating the same information it would have generated in Markdown anyway.&lt;/p&gt;

&lt;p&gt;The difference is the presentation layer.&lt;/p&gt;

&lt;p&gt;Once agents become capable enough to synthesize large amounts of context, presentation suddenly matters much more than before. An ugly Markdown dump feels cognitively heavy, even if the information itself is good. The exact same output rendered with diagrams, visual hierarchy, cards, timelines, previews, tabs, and interactive controls suddenly feels significantly more intelligent and usable.&lt;/p&gt;

&lt;p&gt;In many cases, the reasoning barely changed at all.&lt;/p&gt;

&lt;p&gt;The interface changed.&lt;/p&gt;

&lt;p&gt;And browsers are still the best medium we’ve ever built for information density and navigation. HTML was always going to become the natural output layer once agents became capable of producing things larger than simple chat replies.&lt;/p&gt;

&lt;p&gt;So the article itself is not technically revolutionary. But it captures something important about where agent workflows are heading.&lt;/p&gt;

&lt;p&gt;Agents are increasingly useful not just because they “know” things, but because they can take messy, overwhelming information and reshape it into formats humans can actually understand, navigate, share, and work with comfortably.&lt;/p&gt;

&lt;p&gt;That’s probably why the post resonated so much. It articulated an obvious idea that many people were already experiencing intuitively while working with agents every day.&lt;/p&gt;
</description>
        <pubDate>Sun, 10 May 2026 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/ai/agents/ux/2026/05/10/agents-markdown-humans-html/</link>
        <guid isPermaLink="true">https://murshidm.github.io/ai/agents/ux/2026/05/10/agents-markdown-humans-html/</guid>
      </item>
    
      <item>
        <title>The Business Brain: Organizing Small Business Chaos with AI Agents</title>
        <description>&lt;p&gt;Small businesses are still massively under-discussed when it comes to AI transformation.&lt;/p&gt;

&lt;p&gt;Most of them don’t feel like systems. They feel like habits stitched together with spreadsheets, inboxes, and informal decisions. Work gets done, but only because people constantly bridge the gaps.&lt;/p&gt;

&lt;p&gt;Invoices in Excel. Customer updates in WhatsApp. Support handled from memory. Pricing exceptions decided case by case. It functions, but it’s fragile.&lt;/p&gt;

&lt;p&gt;The opportunity for AI isn’t just automation. It’s structure.&lt;/p&gt;

&lt;p&gt;Agentic workflows aren’t valuable only because they execute tasks faster. Their real value is that they can observe how work actually happens, surface patterns, and reveal where knowledge is fragmented or inconsistent.&lt;/p&gt;

&lt;p&gt;Most inefficiency in small businesses isn’t dramatic failure. It’s repetition without structure:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The same invoice corrections happening repeatedly&lt;/li&gt;
  &lt;li&gt;Customer replies rewritten from scratch&lt;/li&gt;
  &lt;li&gt;Refund decisions varying by staff member&lt;/li&gt;
  &lt;li&gt;SOPs that exist but aren’t consistently applied&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI agents can begin to expose these patterns. Not just act, but map how work flows and where it breaks.&lt;/p&gt;

&lt;p&gt;That leads to a useful framing: the &lt;strong&gt;Business Brain&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A Business Brain isn’t a chatbot over documents or a search layer across tools. It’s a living representation of how a company operates: its decisions, processes, and context, continuously updated through real activity.&lt;/p&gt;

&lt;p&gt;It connects actions across systems and preserves operational memory so that work becomes consistent, not dependent on who happens to be involved.&lt;/p&gt;

&lt;p&gt;Underneath this shift is a simple constraint: models are no longer the bottleneck.&lt;/p&gt;

&lt;p&gt;The bottleneck is domain knowledge.&lt;/p&gt;

&lt;p&gt;In most companies, critical knowledge is scattered across Slack messages, email threads, spreadsheets, support tickets, and individual memory. Humans act as the integration layer between all of it.&lt;/p&gt;

&lt;p&gt;AI agents don’t naturally operate in that fragmented environment. They require structure: explicit processes, accessible context, and operational memory rather than passive documentation.&lt;/p&gt;

&lt;p&gt;This is why many agent-focused systems, including those discussed in the YC ecosystem, converge on similar ideas around persistent context, tool use, and organizational memory.&lt;/p&gt;

&lt;p&gt;The direction is consistent: AI that doesn’t just respond, but operates across workflows with continuity.&lt;/p&gt;

&lt;p&gt;One YC framing that captures this well is that the real blocker to AI automation is not model capability, but fragmented company knowledge spread across tools and people:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://youtu.be/IaWIazkWWog?si=LIWz3eGLZhSmVUAo&quot;&gt;Business Brain&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The implication is straightforward:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI doesn’t just automate work; it forces work to become structured.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For small businesses, this is the real unlock. Not surface-level automation, but the transformation of messy, informal operations into systems that can be understood, improved, and reliably executed.&lt;/p&gt;

&lt;p&gt;That’s where efficiency gains appear. That’s where leakage gets eliminated. And that’s where the Business Brain becomes meaningful not as a feature, but as an operating layer for the company.&lt;/p&gt;
</description>
        <pubDate>Sat, 09 May 2026 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/ai/business/operations/2026/05/09/the-business-brain-organizing-small-business-chaos-with-ai-agents/</link>
        <guid isPermaLink="true">https://murshidm.github.io/ai/business/operations/2026/05/09/the-business-brain-organizing-small-business-chaos-with-ai-agents/</guid>
      </item>
    
      <item>
        <title>Why Wiki-style Knowledge is Better Than RAG for Agents</title>
        <description>&lt;p&gt;I’ve been thinking about this a lot lately, and I keep ending up in the same place: for most agent workflows, the wiki-style knowledge structure just feels more natural and durable than classic RAG.&lt;/p&gt;

&lt;p&gt;RAG had its moment, and it still works, but in practice it often feels like a temporary architectural compromise. You chunk documents, embed them, retrieve top-k, and hope the answer is somewhere in that semantic neighborhood. It’s clever, but it’s also a bit fragile. Context gets fragmented, relevance is probabilistic, and the system never really “understands” how knowledge connects.&lt;/p&gt;

&lt;p&gt;The wiki pattern feels different. Instead of treating knowledge as scattered embeddings to be fetched on demand, you structure it like a living system of interconnected pages, much like how humans actually maintain shared understanding. Agents don’t just retrieve snippets; they navigate a knowledge graph that can evolve, accumulate structure, and support reasoning over time.&lt;/p&gt;

&lt;p&gt;What’s interesting is that the industry is slowly drifting in this direction. Even tools like Pinecone’s Nexus are pointing toward more agent-native knowledge systems rather than pure vector retrieval. You can see the shift away from “search for context” toward “maintain and reason over a structured memory.”&lt;/p&gt;

&lt;p&gt;There’s also a growing discourse around this in the agent community. Andrej Karpathy has been vocal about the idea that LLMs paired with persistent, structured knowledge bases, almost wiki-like in nature, are a more scalable direction than naive RAG pipelines. Not as a rejection of retrieval, but as an upgrade in how knowledge is organized and maintained over time.&lt;/p&gt;

&lt;p&gt;Karpathy method:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.mindstudio.ai/blog/andrej-karpathy-llm-wiki-knowledge-base-claude-code&quot;&gt;Andrej Karpathy: LLM wiki knowledge base discussion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To me, the key difference is this: RAG is about finding knowledge, while wiki-style systems are about accumulating and structuring it. And for agents that are supposed to operate over long time horizons, the latter starts to matter a lot more.&lt;/p&gt;

&lt;p&gt;RAG wasn’t wrong; it just feels increasingly like an intermediate step.&lt;/p&gt;
</description>
        <pubDate>Fri, 08 May 2026 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/ai/agents/knowledge%20management/2026/05/08/why-wiki-style-knowledge-is-better-than-rag-for-agents/</link>
        <guid isPermaLink="true">https://murshidm.github.io/ai/agents/knowledge%20management/2026/05/08/why-wiki-style-knowledge-is-better-than-rag-for-agents/</guid>
      </item>
    
      <item>
        <title>What&apos;s new in Safari 17</title>
        <description>&lt;p&gt;Apple has unveiled Safari 17, its latest browser release that delivers a significant upgrade to browsing technology across its desktop and mobile platforms.&lt;/p&gt;

&lt;h2 id=&quot;wwdc23-announcements&quot;&gt;WWDC23 announcements&lt;/h2&gt;

&lt;p&gt;The announcement was done at Apple’s &lt;a href=&quot;https://developer.apple.com/videos/wwdc2023/&quot;&gt;Worldwide Developers Conference (WWDC23)&lt;/a&gt;, where a host of exciting Apple product updates were showcased.&lt;/p&gt;

&lt;p&gt;While the event attracted significant attention due to the next-generation hardware announcements by Apple, it is equally important to underscore the notable advancements announced for Safari.&lt;/p&gt;

&lt;p&gt;In the latest WebKit open-source blog post, Apple revealed that Safari 17 will introduce an impressive &lt;a href=&quot;https://webkit.org/blog/14205/news-from-wwdc23-webkit-features-in-safari-17-beta/&quot;&gt;88 new web features&lt;/a&gt;. Key among those features are integration of Web Apps on Mac (🎉), Spatial Web elements for immersive spatial computing, improved Media streaming capabilities and exciting updates for render elements.&lt;/p&gt;

&lt;p&gt;Today we will delve into the key features that are specifically designed to elevate and optimize web performance.&lt;/p&gt;

&lt;p&gt;It’s important to note that these features are currently available only via &lt;a href=&quot;https://developer.apple.com/safari/technology-preview/&quot;&gt;Safari Technology Preview&lt;/a&gt;, an experimental version of Apple’s Safari browser for developers, while the official release is anticipated for all users in the upcoming fall.&lt;/p&gt;

&lt;h3 id=&quot;jpeg-xl&quot;&gt;JPEG XL&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://jpeg.org/jpegxl/&quot;&gt;JPEG XL&lt;/a&gt; is a groundbreaking modern image format. Safari 17 will be the first browser to support JPEG XL officially.&lt;/p&gt;

&lt;p&gt;JPEG XL offers the capability to recompress images without compromising data, leading to a significant reduction in file size of up to 60%. This advancement promises improved image quality and reduced transfer size, benefiting web performance and user experience.&lt;/p&gt;

&lt;p&gt;What sets JPEG XL apart is its progressive loading capability, enabling users to start viewing images before the entire file is downloaded.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/safari17-jxl.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;While it competes with AVIF and WebP, this new image format aims to become the unifying solution for raster images on the web, similar to what SVG is for vector graphics.&lt;/p&gt;

&lt;h3 id=&quot;use-case&quot;&gt;Use case&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Compression:&lt;/strong&gt; Superior image quality and compression ratios compared to legacy JPEG. The compression ratios achieved by JPEG XL are superior to those of both AVIF and WebP, as the file size can be reduced to up to 1/50th of the original size.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Smooth transition:&lt;/strong&gt; JPEG is the most popular web image format. Transcoding JPEG images to JPEG XL is seamless and lossless, ensuring compatibility with existing JPEG-based applications.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Optimization:&lt;/strong&gt; The codec can be optimized for specific criteria: for example, retaining high fidelity (compatibility with professional photography needs), encoding speed (faster image generation), or compression ratio (smaller files).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s worth noting that each format brings its own set of features and optimizations to the table, aiming to strike the right balance between image quality and file size.&lt;/p&gt;

&lt;p&gt;While there is wide adoption for &lt;a href=&quot;https://caniuse.com/?search=AVIF&quot;&gt;AVIF&lt;/a&gt; and &lt;a href=&quot;https://caniuse.com/?search=webp&quot;&gt;WebP&lt;/a&gt;, &lt;a href=&quot;https://caniuse.com/?search=JPEG-XL&quot;&gt;JPEG XL&lt;/a&gt; was trailing. Competing formats are bad for web standards, but it would be interesting to see the growth and development of this image format with the support of Apple.&lt;/p&gt;

&lt;h2 id=&quot;preconnect-via-http-early-hints&quot;&gt;Preconnect via HTTP Early Hints&lt;/h2&gt;

&lt;p&gt;Safari 17 brings a welcome performance feature by introducing support for &lt;a href=&quot;https://webkit.org/blog/14205/news-from-wwdc23-webkit-features-in-safari-17-beta/#networking&quot;&gt;preconnect&lt;/a&gt; via HTTP Early Hints. While this functionality was previously limited to Chromium-based browsers, it is now being experimented with in Safari.&lt;/p&gt;

&lt;p&gt;While Safari has previously supported other resource hints such as DNS prefetch (desktop), prefetch (experimental), and preload (desktop and mobile), the introduction of preconnect via Early Hints in Safari 17 further enhances the browser’s performance capabilities.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&amp;lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.googleapis.com&quot;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With preconnect, browsers have the ability to establish connections with external resources, like servers and APIs, ahead of time. This is particularly useful when a webpage includes references to external libraries or resources, such as Google Fonts.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/safari17-google.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Image 1: An example of a preconnect tag pointing to the Google Fonts servers&lt;/p&gt;

&lt;p&gt;When a browser requests a resource, it performs a DNS lookup, establishes a TCP connection, and negotiates TLS encryption to ensure secure transmission. With preconnect, the browser completes this setup ahead of time, so the connection is ready the moment the resource is actually needed.&lt;/p&gt;
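
&lt;p&gt;With Early Hints, the server can deliver that same preconnect hint in an interim HTTP 103 response, before the final page response is ready. A simplified exchange:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;HTTP/1.1 103 Early Hints
Link: &amp;lt;https://fonts.googleapis.com&amp;gt;; rel=preconnect

HTTP/1.1 200 OK
Content-Type: text/html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;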

&lt;p&gt;Testing by Cloudflare has demonstrated a noteworthy 30% improvement in page load time for initial website visits using Early Hints. This underscores the significance of widespread browser adoption for better web performance.&lt;/p&gt;

&lt;h2 id=&quot;fetch-priority&quot;&gt;Fetch priority&lt;/h2&gt;

&lt;p&gt;Fetch Priority is a markup-based signal that enables developers to prioritize resources on their websites. This was previously only available in Chromium and now comes to Safari as well.&lt;/p&gt;

&lt;p&gt;As opposed to server- and network-level Early Hints, this is primarily a browser API-based signal.&lt;/p&gt;

&lt;p&gt;Each browser has its own prioritization system for different file types. For example, Chrome by default assigns high priority to CSS files as they are considered render-blocking resources, while non-blocking JavaScript and images are set to low priority.&lt;/p&gt;

&lt;p&gt;It can be primarily used to boost the priority of the Largest Contentful Paint (LCP) image by specifying &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fetchpriority=&quot;high&quot;&lt;/code&gt; on the image element. This causes the LCP to happen sooner, resulting in a faster website load time.&lt;/p&gt;

&lt;p&gt;Since LCP is a Core Web Vital, this is one of the most popular ways web developers utilize this feature.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&amp;lt;!-- Fetch the LCP image on high priority --&amp;gt;
&amp;lt;img src=&quot;lcp.jpg&quot; fetchpriority=&quot;high&quot;&amp;gt;

&amp;lt;!-- Lower the priority of a below the fold image --&amp;gt;
&amp;lt;img src=&quot;image.jpg&quot; fetchpriority=&quot;low&quot;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For Safari, it is unclear at this point how fetchpriority will be implemented, given the browser’s own resource prioritization scheme. But there is a detailed guide on how &lt;a href=&quot;https://web.dev/fetch-priority/&quot;&gt;Google Chrome&lt;/a&gt; handles this feature.&lt;/p&gt;

&lt;h2 id=&quot;why-safari-matters&quot;&gt;Why Safari matters&lt;/h2&gt;

&lt;p&gt;Safari’s last major upgrade was version 16, which was released for macOS Monterey and macOS Big Sur in September 2022. It was also shipped with iOS 16, covering iPhone and iPad devices.&lt;/p&gt;

&lt;p&gt;When considering the usage of web browsers, Safari holds a notable position, although it falls well behind Google Chrome in terms of global browser market share. (Desktop 12.85%, Mobile 27.8% - &lt;a href=&quot;https://gs.statcounter.com/browser-market-share/desktop/worldwide&quot;&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;While prioritizing Chrome is essential for enhancing web performance due to its market dominance, it is equally crucial to recognize Safari’s substantial presence on mobile devices.&lt;/p&gt;

&lt;p&gt;Thus, if your target audience predominantly uses Safari, it is crucial to dedicate efforts to test, improve, and optimize your website specifically for Safari users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Safari 17 Beta Release Notes &lt;a href=&quot;https://developer.apple.com/documentation/safari-release-notes/safari-17-release-notes&quot;&gt;https://developer.apple.com/documentation/safari-release-notes/safari-17-release-notes&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Comparison of next-Gen image formats &lt;a href=&quot;https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg&quot;&gt;https://cloudinary.com/blog/time_for_next_gen_codecs_to_dethrone_jpeg&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Safari supported media formats &lt;a href=&quot;https://developer.apple.com/videos/play/wwdc2023/10122&quot;&gt;https://developer.apple.com/videos/play/wwdc2023/10122&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Wed, 07 Jun 2023 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/browser/web%20performance/2023/06/07/whats-new-in-safari-17/</link>
        <guid isPermaLink="true">https://murshidm.github.io/browser/web%20performance/2023/06/07/whats-new-in-safari-17/</guid>
      </item>
    
      <item>
        <title>Back Forward cache: What you need to know</title>
        <description>&lt;p&gt;Imagine you’re browsing an online store, looking at different products. You click on an item to view its details but then decide to go back and continue exploring. Thanks to bfcache, when you return to the previous page, it instantly appears with all the information and images loaded, without having to wait for it to load again from the server.&lt;/p&gt;

&lt;h2 id=&quot;what-is-bfcache&quot;&gt;What is bfcache?&lt;/h2&gt;

&lt;p&gt;Back/forward cache (bfcache) is a feature of web browsers that allows pages to be restored from memory after the user has navigated away from them and then back again. This can improve the performance of web pages by reducing the number of times that pages need to be loaded from the server.&lt;/p&gt;

&lt;p&gt;In the scenario above, this seamless experience saves you time and bandwidth and makes your shopping journey smoother.&lt;/p&gt;

&lt;p&gt;Let’s delve into the details of this feature, which is supported by &lt;a href=&quot;https://caniuse.com/?search=bfcache&quot;&gt;modern browsers&lt;/a&gt;, including mobile devices.&lt;/p&gt;

&lt;h2 id=&quot;how-does-bfcache-work&quot;&gt;How does bfcache work?&lt;/h2&gt;

&lt;p&gt;Bfcache operates by saving a comprehensive snapshot of a page, including the JavaScript heap, in memory as the user navigates away. With the complete page stored in memory, the browser can swiftly and effortlessly restore it when the user chooses to return.&lt;/p&gt;

&lt;p&gt;This snapshot encompasses the HTML, CSS, JavaScript, and all other resources essential for rendering the page. Later, if the user navigates away from and then back to the page, the browser can restore it from memory instead of fetching it again from the server.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/bfcache-what-stored.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;pagetransitionevent&quot;&gt;PageTransitionEvent&lt;/h3&gt;

&lt;p&gt;The PageTransitionEvent plays a role in signaling page transitions, including when a page is being unloaded or when a new page is being loaded. When a user navigates away from a page, such as by clicking a link or using the browser’s back or forward button, the PageTransitionEvent is triggered to notify the page that it is being transitioned out of view.&lt;/p&gt;

&lt;p&gt;The transition lifecycle involves several events: “pageshow” and “pagehide” (both PageTransitionEvents), along with “beforeunload” and “unload”, each corresponding to a specific stage in the page transition process.&lt;/p&gt;

&lt;p&gt;This offers developers a way to observe and respond to page transitions within the browser, enabling them to enhance user experiences, implement custom behaviors, and optimize resource management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The unload event has been available in web browsers for a long time and is triggered when a page is being unloaded, either by navigating away or closing the browser window. On the other hand, the pagehide event was introduced later and provides an additional opportunity for developers to perform actions or cleanup tasks before a page is hidden or unloaded.&lt;/p&gt;
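&lt;p&gt;As a minimal sketch (the helper name is hypothetical, not a browser API), a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pageshow&lt;/code&gt; listener can use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;event.persisted&lt;/code&gt;, which is true only for bfcache restores, to tell the two cases apart:&lt;/p&gt;

```javascript
// Hypothetical helper (not a browser API): report whether a pageshow
// event represents a restore from the back/forward cache.
function wasRestoredFromBfcache(event) {
  // event.persisted is true only when the page came back from bfcache.
  return event.persisted === true;
}

// Attach only in a browser; the guard keeps the snippet Node-safe.
if (typeof window !== 'undefined') {
  window.addEventListener('pageshow', function (event) {
    if (wasRestoredFromBfcache(event)) {
      console.log('Page restored from bfcache');
    } else {
      console.log('Page loaded normally');
    }
  });
}
```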

&lt;h2 id=&quot;how-can-i-make-my-pages-eligible-for-bfcache&quot;&gt;How can I make my pages eligible for bfcache?&lt;/h2&gt;

&lt;p&gt;There are a few things that you can do to make your pages eligible for bfcache:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Avoid using the unload event&lt;/strong&gt;
In most browsers, simply registering an unload listener makes a page ineligible for bfcache, regardless of what the listener does. Use the pagehide event instead.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Add beforeunload listeners only conditionally&lt;/strong&gt;
In some browsers (notably Firefox), a registered beforeunload listener also makes the page ineligible for bfcache. If you need it, for example to warn about unsaved changes, add the listener only when there is unsaved data and remove it as soon as the data is saved.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Avoid in-flight network requests during navigation&lt;/strong&gt;
Pages with pending requests (such as an open fetch or XMLHttpRequest) when the user navigates away may be evicted from the bfcache, because the browser cannot safely pause and resume those requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If there are any unload event listeners on your page, you should convert them to pagehide event listeners.&lt;/p&gt;
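&lt;p&gt;A minimal migration sketch (the payload helper and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/analytics&lt;/code&gt; endpoint are hypothetical): move flush work from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unload&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pagehide&lt;/code&gt;, which also fires when a page enters the bfcache:&lt;/p&gt;

```javascript
// Hypothetical helper: build the state to flush when the page hides.
function buildBeaconPayload(event) {
  // event.persisted is true when the page may enter the bfcache;
  // flush here in both cases, since 'unload' might never fire.
  return JSON.stringify({ enteringBfcache: event.persisted === true });
}

if (typeof window !== 'undefined') {
  // Use 'pagehide' rather than 'unload': an unload listener can make
  // the page ineligible for the bfcache.
  window.addEventListener('pagehide', function (event) {
    navigator.sendBeacon('/analytics', buildBeaconPayload(event));
  });
}
```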

&lt;h2 id=&quot;how-can-i-check-if-my-pages-are-eligible-for-bfcache&quot;&gt;How can I check if my pages are eligible for bfcache?&lt;/h2&gt;

&lt;p&gt;You can check whether your pages are eligible for bfcache using Chrome DevTools. Open DevTools, go to the Application tab, and select Back/forward cache in the sidebar. Click Test back/forward cache: DevTools navigates away and back, then reports whether the page was served from bfcache and lists any reasons that blocked it.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/bfcache-check-1.jpg&quot; alt=&quot;&quot; /&gt;
&lt;img src=&quot;/images/bfcache-check-2.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You can find more information about testing page optimization for instant loads and identifying issues that may affect eligibility for back-forward cache in the &lt;a href=&quot;https://developer.chrome.com/docs/devtools/application/back-forward-cache/&quot;&gt;Chrome DevTools documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;limitations-of-bfcache&quot;&gt;Limitations of bfcache&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Size&lt;/strong&gt;
The size of the bfcache is limited by the amount of memory available on the device. When the bfcache is full, the browser will start to evict pages that have not been visited recently. This means that pages that are frequently visited or that contain a lot of resources may not be cached if the bfcache is full.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Persistence&lt;/strong&gt;
The bfcache is not persistent. When the browser is closed, the bfcache is cleared. This means that any pages that are stored in the bfcache will be lost when the browser is closed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Security&lt;/strong&gt;
The bfcache offers no extra protection for sensitive data: the full page snapshot, including any sensitive information rendered on it, stays in memory and can be viewed by anyone with access to the device. Sensitive information, such as passwords or credit card numbers, should therefore not be kept on pages that can be cached.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;what-are-the-benefits&quot;&gt;What are the benefits?&lt;/h2&gt;

&lt;p&gt;Bfcache can improve the performance of web pages by reducing the number of times that pages need to be loaded from the server. This can lead to faster page load times, which can improve the user experience.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Improved performance&lt;/strong&gt;
bfcache can significantly improve the performance of web pages, especially for pages that are frequently visited or that contain a lot of resources. By storing a copy of the page in memory, the browser can avoid having to make a new request to the server, which can save time.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Reduced bandwidth usage&lt;/strong&gt;
bfcache can also help to reduce bandwidth usage by the browser. When a page is stored in bfcache, the browser does not have to download the page from the server again when the user navigates back to it. This can save bandwidth for both the user and the host.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Better user experience&lt;/strong&gt;
bfcache can provide a better user experience by making it faster and easier for users to navigate back to pages that they have recently visited. This can help to improve the overall usability of a website and make it more enjoyable for users to visit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;key-observations&quot;&gt;Key observations&lt;/h2&gt;

&lt;p&gt;In her recent talk, &lt;a href=&quot;https://www.youtube.com/watch?v=bZV7XxsCNb8&amp;amp;t=647s&quot;&gt;How to Prioritize Web Performance Optimizations&lt;/a&gt;, Melissa Ada presented some interesting observations about bfcache.
&lt;img src=&quot;/images/bfcache-ada-1.jpg&quot; alt=&quot;&quot; /&gt;
(Slide from &lt;a href=&quot;https://www.welovespeed.com/assets/docs/2023/melissa-ada-prioritizing-web-werformance-pptimizations.pdf&quot;&gt;Melissa Ada&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The sharp improvement in mobile CLS scores in 2022 is believed to be closely related to the introduction of bfcache; the Web Almanac team suspects it as one of the primary factors behind the trend.&lt;/p&gt;

&lt;p&gt;You can refer to the &lt;a href=&quot;https://almanac.httparchive.org/en/2022/performance#bfcache-eligibility&quot;&gt;Web Almanac report&lt;/a&gt; for 2022 to explore the detailed analysis and reasoning behind this correlation. It aligns with the &lt;a href=&quot;https://web.dev/bfcache/#test-to-ensure-your-pages-are-cacheable&quot;&gt;launch&lt;/a&gt; of bfcache on Chrome in late 2021.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Bfcache eligibility criteria (100+) &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1li0po_ETJAIybpaSX5rW_lUN62upQhY0tH4pR5UPt60/edit#gid=0&quot;&gt;https://docs.google.com/spreadsheets/d/1li0po_ETJAIybpaSX5rW_lUN62upQhY0tH4pR5UPt60/edit#gid=0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Tue, 16 May 2023 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/browser/web%20performance/2023/05/16/bfcache-what-you-need-to-know/</link>
        <guid isPermaLink="true">https://murshidm.github.io/browser/web%20performance/2023/05/16/bfcache-what-you-need-to-know/</guid>
      </item>
    
      <item>
        <title>Understanding how browsers identify the LCP element</title>
        <description>&lt;p&gt;Largest Contentful Paint (LCP) is a key performance metric that measures the loading speed of a webpage and is part of the Google Web Vitals initiative, which was introduced in May 2020.&lt;/p&gt;

&lt;p&gt;Specifically, LCP measures the time it takes for the largest visible element on a page to become fully loaded and rendered in the viewport.&lt;/p&gt;

&lt;p&gt;This element could be an image, video, or another content element that occupies a significant amount of screen real estate.&lt;/p&gt;

&lt;h2 id=&quot;how-does-chrome-determine-the-largest-contentful-paint-lcp-element&quot;&gt;How does Chrome determine the Largest Contentful Paint (LCP) Element?&lt;/h2&gt;

&lt;p&gt;Here are the general steps that Chrome takes to determine the LCP element:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The browser begins to load the webpage and parses the HTML code to construct the Document Object Model (DOM) and, together with the CSSOM, the render tree.&lt;/li&gt;
  &lt;li&gt;As the page loads, the browser searches for all visible block-level or replaced elements (such as images or videos) &lt;strong&gt;within the viewport&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;For each eligible element, the browser calculates the size of the render box, which takes into account the element’s width, height, padding, and border. This calculation doesn’t include the margin or size of any child elements.&lt;/li&gt;
  &lt;li&gt;The browser selects the element with the largest render box size as the candidate for Largest Contentful Paint (LCP). This element must also be loaded via an HTTP request and be visible in the viewport at the time of measurement.&lt;/li&gt;
  &lt;li&gt;After identifying the LCP candidate, the browser measures the time it takes for the element to become fully visible in the viewport. This occurs when the element’s content is rendered and its layout is stable.&lt;/li&gt;
  &lt;li&gt;The time it takes for the largest contentful paint (LCP) element to become fully visible is recorded as the LCP value for the webpage.&lt;/li&gt;
&lt;/ol&gt;
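&lt;p&gt;The steps above can be sketched in code; the field names and helpers below are hypothetical, illustrating step 3’s render-box size (width and height plus padding and border, excluding margins) and step 4’s selection of the largest candidate:&lt;/p&gt;

```javascript
// Illustrative sketch of step 3: the candidate's reported size includes
// width/height plus padding and border, but not margins or children.
// The box field names here are hypothetical.
function renderBoxArea(box) {
  const width = box.width + box.paddingLeft + box.paddingRight +
    box.borderLeft + box.borderRight;
  const height = box.height + box.paddingTop + box.paddingBottom +
    box.borderTop + box.borderBottom;
  return width * height;
}

// Step 4 is then a max over the eligible in-viewport boxes.
function largestCandidate(boxes) {
  let best = null;
  for (const box of boxes) {
    if (best === null) {
      best = box;
    } else if (renderBoxArea(box) > renderBoxArea(best)) {
      best = box;
    }
  }
  return best;
}
```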

&lt;p&gt;It’s important to note that the LCP value may &lt;a href=&quot;https://web.dev/lcp/#when-is-largest-contentful-paint-reported&quot;&gt;change&lt;/a&gt; as the page continues to load, and the final LCP value is typically the one recorded when the page finishes loading or when the user interacts with the page.&lt;/p&gt;
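&lt;p&gt;These changing candidates can be observed from JavaScript with a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PerformanceObserver&lt;/code&gt; (a standard browser API; the small helper is an illustrative sketch). Each new, larger candidate produces a largest-contentful-paint entry, and the last entry reported is the final LCP element:&lt;/p&gt;

```javascript
// Illustrative helper: the browser may report several LCP candidates;
// the most recent entry is the current (and eventually final) one.
function latestLcpEntry(entries) {
  if (entries.length === 0) return null;
  return entries[entries.length - 1];
}

// Guarded feature check so the snippet is safe outside browsers.
const lcpSupported =
  typeof PerformanceObserver !== 'undefined'
    ? (PerformanceObserver.supportedEntryTypes || []).includes('largest-contentful-paint')
    : false;

if (lcpSupported) {
  new PerformanceObserver(function (list) {
    const entry = latestLcpEntry(list.getEntries());
    if (entry) {
      console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```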

&lt;p&gt;&lt;img src=&quot;/images/lcp-1.png.webp&quot; alt=&quot;&quot; /&gt;
Figure 1: LCP element changed after the page is fully loaded&lt;/p&gt;

&lt;h2 id=&quot;factors-that-could-affect-lcp-value&quot;&gt;Factors that could affect LCP value&lt;/h2&gt;

&lt;p&gt;As per Google, the LCP metric can be primarily affected by these four factors,&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Slow server response times&lt;/li&gt;
  &lt;li&gt;Render-blocking JavaScript and CSS&lt;/li&gt;
  &lt;li&gt;Resource load times&lt;/li&gt;
  &lt;li&gt;Client-side rendering&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Slow server response times can cause delays in loading content, while render-blocking JavaScript and CSS can prevent the browser from rendering content until they’re fully loaded.&lt;/p&gt;

&lt;p&gt;Resource load times refer to the time taken for the browser to download and process images, videos, and other media, while client-side rendering involves processing and rendering content on the user’s device.&lt;/p&gt;

&lt;h3 id=&quot;mobile-and-desktop-may-have-different-lcp-elements&quot;&gt;Mobile and Desktop may have different LCP elements&lt;/h3&gt;

&lt;p&gt;Mobile devices and desktops have different screen sizes and resolutions, which can affect how a webpage is rendered and which elements are considered the largest on the page.&lt;/p&gt;

&lt;p&gt;For example, an image that takes up a significant portion of a desktop screen may not take up as much space on a mobile screen, where the LCP element might instead be a text block or a button.&lt;/p&gt;

&lt;p&gt;Additionally, mobile devices often have slower network speeds and processing power compared to desktops, which can impact how quickly the LCP element is loaded and rendered on the screen.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/lcp-2.png.webp&quot; alt=&quot;&quot; /&gt;
Figure 2: LCP element for &lt;a href=&quot;http://dailymail.co.uk/&quot;&gt;dailymail.co.uk&lt;/a&gt; is different on mobile and desktop&lt;/p&gt;

&lt;h3 id=&quot;common-lcp-elements-on-a-webpage&quot;&gt;Common LCP elements on a webpage&lt;/h3&gt;

&lt;p&gt;Google specifies which elements may be considered for LCP. Refer to &lt;a href=&quot;https://web.dev/lcp/#what-elements-are-considered&quot;&gt;https://web.dev/lcp/#what-elements-are-considered&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Below are some common elements in web pages that are eligible:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Hero section image or video&lt;/li&gt;
  &lt;li&gt;Background image
Background images which are loaded via the &lt;a href=&quot;https://developer.mozilla.org/docs/Web/CSS/url()&quot;&gt;url()&lt;/a&gt; function&lt;/li&gt;
  &lt;li&gt;Video thumbnail&lt;/li&gt;
  &lt;li&gt;Image carousel&lt;/li&gt;
  &lt;li&gt;Text
Text can also be an LCP element, especially if it is the primary content on a webpage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src=&quot;/images/lcp-3.png.webp&quot; alt=&quot;&quot; /&gt;
Figure 3: The LCP element can differ from website to website&lt;/p&gt;

&lt;h3 id=&quot;what-elements-may-not-be-considered&quot;&gt;What elements may not be considered&lt;/h3&gt;

&lt;p&gt;It’s understood that not all elements are considered candidates for LCP. For instance, elements that are not visible within the viewport, such as elements located at the bottom of the page or behind other elements, are not considered.&lt;/p&gt;

&lt;p&gt;Additionally, elements that are dynamically generated, such as ads or pop-ups, may not be considered if they are not loaded via an HTTP request.&lt;/p&gt;

&lt;p&gt;While background images can be an LCP element, solid background colours or patterns are typically not considered since they don’t require an HTTP request to load.&lt;/p&gt;

&lt;p&gt;Lazy-loaded content may also be excluded from LCP. Lazy-loading is a technique used to defer the loading of non-critical resources, such as below-the-fold images or videos, until they are needed.&lt;/p&gt;

&lt;p&gt;Since the lazy load resources are not loaded immediately, they may not be considered part of the LCP calculation.&lt;/p&gt;
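&lt;p&gt;As a hypothetical illustration (the helper name is not from any library), a quick audit can flag lazy-loaded images, since lazy-loading an above-the-fold hero image typically delays, or excludes it from, the LCP measurement:&lt;/p&gt;

```javascript
// Hypothetical check: is this element an image with native lazy-loading?
function isLazyLoadedImage(el) {
  if (!el) return false;
  if (el.tagName !== 'IMG') return false;
  return el.loading === 'lazy';
}

// In a browser, warn about each lazy-loaded image on the page.
if (typeof document !== 'undefined') {
  for (const img of document.querySelectorAll('img[loading=lazy]')) {
    console.warn('Lazy-loaded image; avoid this above the fold:', img.src);
  }
}
```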

&lt;p&gt;Similarly, an SVG element may not be considered for LCP even when it is the largest visible element on the page.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/lcp-4.png.webp&quot; alt=&quot;&quot; /&gt;
Figure 4: A larger SVG element may be ignored&lt;/p&gt;

&lt;h3 id=&quot;how-to-find-the-lcp-element-on-my-webpage&quot;&gt;How to find the LCP element on my webpage&lt;/h3&gt;

&lt;p&gt;To replicate and test the LCP (Largest Contentful Paint) values of your webpage, you can run audits with online tools such as Google’s PageSpeed Insights, GTmetrix, and WebPageTest.&lt;/p&gt;

&lt;p&gt;In a local environment, you can use Chrome’s DevTools:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open the webpage of interest in your browser, such as &lt;strong&gt;&lt;a href=&quot;https://www.cnn.com/&quot;&gt;cnn.com&lt;/a&gt;&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Open the Developer Tools in your browser. In Chrome, you can do this by clicking on the top right menu &amp;gt; More Tools &amp;gt; Developer Tools, or by pressing Ctrl+Shift+I.&lt;/li&gt;
  &lt;li&gt;Navigate to the Performance tab in the Developer Tools and click the Reload button to load the webpage.&lt;/li&gt;
  &lt;li&gt;After the page has loaded, locate the Largest Contentful Paint (LCP) option under the timings row and hover over it to view the LCP time.&lt;/li&gt;
  &lt;li&gt;To identify the LCP element, look for the element highlighted in blue in the screenshot, as shown in the LCP section of the Developer Tools.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src=&quot;/images/lcp-5.png.webp&quot; alt=&quot;&quot; /&gt;
Figure 5: Identify the LCP element on Google Chrome Dev Tools&lt;/p&gt;

&lt;p&gt;Alternatively, you can run a free webpage test on &lt;a href=&quot;https://www.webpagetest.org/&quot;&gt;https://www.webpagetest.org/&lt;/a&gt; where it has advanced tools to track LCP changes.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Select Start a Site Performance test&lt;/li&gt;
  &lt;li&gt;Enter the website URL, such as &lt;a href=&quot;http://bbc.com/&quot;&gt;http://bbc.com/&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Select either mobile or desktop, and click on Start Test&lt;/li&gt;
  &lt;li&gt;Once the test report is generated, select the Web Vitals view&lt;/li&gt;
  &lt;li&gt;Go to the Largest Contentful Paint section and select the Filmstrip view&lt;/li&gt;
  &lt;li&gt;Click on Adjust Filmstrip Settings &amp;gt; Filmstrip options &amp;gt; Highlight Largest Contentful Paints&lt;/li&gt;
  &lt;li&gt;You can observe the LCP element as it loads on the filmstrip view&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src=&quot;/images/lcp-6.png.webp&quot; alt=&quot;&quot; /&gt;
Figure 6: Identify the LCP element by running a &lt;a href=&quot;http://webpagetest.org/&quot;&gt;webpagetest.org&lt;/a&gt; audit&lt;/p&gt;

&lt;p&gt;Update (10 Apr 2023): For a deeper dive into LCP and recent trends from the CrUX report, refer to &lt;a href=&quot;https://almanac.httparchive.org/en/2022/performance#largest-contentful-paint-lcp&quot;&gt;https://almanac.httparchive.org/en/2022/performance#largest-contentful-paint-lcp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy discovering!&lt;/p&gt;
</description>
        <pubDate>Fri, 07 Apr 2023 00:00:00 +0000</pubDate>
        <link>https://murshidm.github.io/web%20performance/core%20web%20vitals/2023/04/07/understanding-lcp-browsers/</link>
        <guid isPermaLink="true">https://murshidm.github.io/web%20performance/core%20web%20vitals/2023/04/07/understanding-lcp-browsers/</guid>
      </item>
    
  </channel>
</rss>
