How Can AI Recommend Your Brand’s Content Today?

In the ever-evolving landscape of digital marketing, Anastasia Braitsik stands out as a pioneer, especially in the realms of SEO, content marketing, and data analytics. Her innovative approach to utilizing AI and generative engine optimization (GEO) models offers fresh insights into how content can thrive in the age of artificial intelligence. Today, we delve into how marketers can adapt to this paradigm shift and optimize content to be recommended by AI and large language models (LLMs).

Can you explain what “Generative engine optimization (GEO)” is and how it differs from traditional SEO?

Generative engine optimization, or GEO, is an evolving discipline focused on getting content recommended by AI engines and large language models, whereas traditional SEO centers on improving rankings on search engines like Google. While traditional SEO primarily involves optimizing for specific keywords and search algorithms, GEO emphasizes creating content that offers genuine informational gain and is seen as valuable by AI systems. This means understanding how AI consumes and processes data, and strategically crafting content that satisfies those unique requirements.

Why do you believe GEO is the correct term to use in the context of optimizing content for AI and LLMs?

The term GEO captures the essence of adapting content strategies to a world where AI recommendations are becoming as critical as search engine results. Building on my experience of creating a large language model, I argue that GEO aptly describes the shift toward content that is not only search-friendly but AI-friendly, too. This means focusing on substantial, value-adding content that tells the world something new, moving beyond mere keyword stuffing to understanding what AI needs to enhance its outputs.

What are some of the real-world experiences you’ve had that demonstrate the success of AI search strategies?

One notable example involves the success we had with Boundless, which commissioned a survey on remote work preferences. The unique data provided by this high-effort content was enough to be cited in AI search results. This success demonstrates the power of original, data-driven content that offers genuine information gain and attracts AI attention effectively. It’s these kinds of approaches that stand out and lead to AI-driven platforms recommending your content.

You mentioned creating unique and valuable content. How can companies identify topics that are genuinely new to tell the world about?

The key is robust market research that relies on the raw data sources AI engines actually consume, rather than solely on keyword tools. By tapping into the discussions and trends captured by platforms like Twitter and other LLM data partnerships, companies can pinpoint real demand for content topics. These insights reveal the white space where your unique take or new data can fill a gap, and that is how you tell the world something truly novel.

How does the “Information Gain” patent by Google emphasize the importance of creating content with genuine value?

Google’s “Information Gain” patent underscores the significance of content that contributes new knowledge or perspective, instead of regurgitating existing information. This aligns with AI’s thirst for fresh data that adds quality to its responses. Unique content enhances verifiability and supports AI’s decision-making capabilities, reinforcing the need for authenticity and depth in digital marketing strategies.

Can you provide an example of a successful case where content was valuable enough to be cited in AI search results?

Returning to the Boundless example, their commissioned survey on remote work offered original insights that struck a chord in AI search recommendations. The distinct data, paired with transparency about research methods, demonstrated enough informational gain to secure its place among AI citations. This case highlights the effectiveness of investing in high-effort, original content for improving AI visibility.

Why is it important for content to include data sources and research methods? How does this increase its verifiability to AI?

Including data sources and research methods fortifies the authenticity and credibility of content. This transparency makes the content more verifiable both for readers and AI, which thrives on reliable and up-to-date data. It’s all about ensuring that the AI sees your content as a trustworthy source, which in turn can influence the AI to prioritize citing or using your work in its outputs.

What role does updating data regularly play in establishing content as a reliable information source for AI?

Regularly updated data signals to AI that your content is a trustworthy and current source of information, making it more attractive for use and citation. Given AI’s reliance on contemporary data for accurate outputs, content that is frequently revised stands a better chance of consistently being recommended by AI engines.

How has the role of keywords changed in the AI era, and why do you believe they are no longer as important as they once were?

In the AI era, the focus has shifted from simple keyword optimization to understanding user intent and the broader context of searches. Keywords are now just a starting point; what really matters is how well the content answers specific questions or addresses needs in a comprehensive and compelling manner. With AI, one-size-fits-all keyword strategies are increasingly ineffective as AI requires nuanced, contextual content that serves diverse audiences.

Why should marketers focus on robust market research rather than keyword research when planning their content strategy?

Robust market research draws on real conversations and trends, providing insights that transcend the surface-level data offered by traditional keyword research. This allows marketers to create content that aligns more accurately with what audiences are genuinely interested in, leading to a better match between content and user expectations, which enhances discoverability and relevance in AI contexts.

What are some of the raw data sources that LLMs use to determine content topics and demand?

LLMs often utilize a variety of data streams, including social media platforms like Twitter, publishing partnerships, and other large-scale data sharing agreements. By analyzing patterns and discussions within these data sources, LLMs determine content topics and demand, making it crucial for content creators to align with these insights in crafting their strategies.

Why is producing AI-derived content not a sustainable long-term strategy for creating valuable content?

AI-derived content often lacks the depth and originality that truly valuable content requires. It’s inherently derivative, potentially misleading, and often unworthy of reuse in AI training. Relying on AI outputs perpetuates a cycle of redundancy that could degrade the intelligence and quality of future AI iterations. Therefore, human creativity, backed by substantial research, remains indispensable for sustainable content strategy.

How should companies use the same data sources that feed AI engines to create more targeted and substantial content?

Companies should leverage these data sources to gain insights into what’s currently fueling AI outputs. They can then craft content that aligns with these insights, ensuring it’s both targeted and fills gaps in existing information. Such strategic alignment can result in content that’s not only more relevant but also likely to be viewed as valuable by AI systems for inclusion and recommendation.

What are some specific SEO basics that still matter in the context of GEO?

Certain SEO fundamentals remain essential even within GEO. These include optimizing page load speeds, deploying schemas, structuring content with a conversational tone, and ensuring ease of access with features like HTML anchor links and RSS feeds. These practices help improve not just user experience but also how AI interacts with and values your content.
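To make the schema point concrete, here is a minimal sketch of JSON-LD Article markup that also points to the underlying research; the headline, author, dates, and dataset details are placeholders rather than a real implementation:

```python
import json

# Placeholder Article markup linking the page to the dataset it is based on.
# All names, dates, and URLs here are illustrative, not real values.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Remote Work Preferences: 2024 Survey Results",  # placeholder headline
    "author": {"@type": "Person", "name": "Jane Doe"},           # placeholder author
    "datePublished": "2024-06-01",
    "dateModified": "2025-01-15",  # refreshed dates signal that the data is kept current
    "isBasedOn": {
        "@type": "Dataset",
        "name": "Remote Work Preferences Survey",  # placeholder dataset name
        "description": "Survey of 1,000 knowledge workers on remote work preferences.",
    },
}

# Emit the markup as a script tag to drop into the page <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```

Markup like this does not replace the substance of the content, but it makes the authorship, freshness, and underlying research machine-readable.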

Why is it important for content to be written by humans rather than generated by AI engines?

Human-authored content brings originality, creativity, and authenticity that AI-generated content lacks. This uniqueness is vital because AI engines, when discerning content quality, prefer sources that offer genuine value and insights. Human creativity provides the nuances and lived experiences that make content relatable and trustworthy, qualities that AI engines strive to identify and utilize.

How can AI identify content that was generated by other AI engines, and why is this an issue for LLM creators?

AI models can statistically detect the telltale signs of AI-generated content, such as repetitive phrasing or lack of personal experience. This poses a challenge for LLM creators who are concerned about diminishing content diversity and quality. Reliance on AI outputs risks embedding circular reasoning and redundancy in AI models, thus eroding their effectiveness over time.
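As a toy illustration of one such statistical signal (far simpler than anything production detectors actually use), a repetition metric over word trigrams can flag text that reuses the same phrases too often:

```python
from collections import Counter

def trigram_repetition_ratio(text: str) -> float:
    """Fraction of word trigrams that appear more than once in the text."""
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

# A deliberately repetitive sample scores higher than varied human prose would.
sample = "Our platform delivers value. Our platform delivers value to every team, every day."
print(f"Repetition ratio: {trigram_repetition_ratio(sample):.2f}")
```

Real detection combines many such signals, but the principle is the same: formulaic text leaves measurable fingerprints.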

What consistent traits might AI engines look for to identify AI-generated writing?

AI often scans for patterns such as rigid structure, conventional tropes, and overly formal language that lack the creative flair of human writing. It also notices attempts to universally address audiences without a nuanced understanding of context. These indicators help AI distinguish between human-authored and AI-produced content, ensuring it prioritizes the former for quality assurance.

What practices can help improve the discoverability of content for AI engines?

Improving discoverability involves a combination of technical and content-oriented strategies. These include maintaining updated and accurate metadata, implementing SEO best practices like schema markups for context, and ensuring interoperability with AI crawlers through files such as llms.txt. Prioritizing conversational and context-rich content also facilitates recognition and recommendation by AI.
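For the llms.txt piece specifically, here is a rough sketch following the commonly described convention of a plain-text/Markdown index served at the site root; the site name, sections, and URLs are placeholders:

```python
from pathlib import Path

# Placeholder llms.txt content: a short site summary followed by
# sections of links pointing AI crawlers to the most useful pages.
llms_txt = """\
# Example Brand

> Original research and data-driven guides on remote work and hiring.

## Research
- [2024 Remote Work Survey](https://example.com/research/remote-work-2024): methodology and full results

## Guides
- [Hiring Across Borders](https://example.com/guides/hiring-across-borders): updated quarterly
"""

# Write the file to the web root so crawlers can fetch it at https://example.com/llms.txt
Path("llms.txt").write_text(llms_txt, encoding="utf-8")
```

The goal is simply to hand AI crawlers a curated map of your best, most verifiable content instead of leaving them to infer it.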

What is the significance of providing information gain through substantive non-LLM-derived research for LLM model inclusion?

Offering information gain through original research improves a piece of content's chances of inclusion in LLM models because it represents untapped value. By presenting new data or perspectives, such content becomes pivotal for refining and evolving AI outputs, which is especially crucial as AIs increasingly aim to deliver accurate and nuanced information.

What are the challenges of creating high-effort content that’s valuable enough for AI without incurring high costs?

Crafting high-value content involves balancing investment in resources, technology, and expertise while aiming for creativity and originality. It requires strategic planning and leveraging available data to produce content that stands out. The challenge lies in minimizing costs without compromising the quality that earns AI recognition, necessitating innovation and efficiency in production techniques.

Do you have any advice for our readers?

Embrace creativity and focus on genuinely adding value with your content. Leverage data strategically, ensuring your offerings are not just relevant today but can influence tomorrow’s AI capabilities. The key lies in consistently producing fresh, insightful content that resonates with real-world needs and utilizes AI’s potential to extend its reach.
