Grokipedia: Musk’s AI Encyclopedia Hits 1M+ Entries to Rival Wikipedia

Image Credit: Jacky Lee

Elon Musk’s artificial intelligence firm xAI has launched Grokipedia, an online encyclopedia built almost entirely by its Grok language model and marketed as a “truth-seeking” rival to Wikipedia.

Version 0.1 of the site went live on 27 October 2025 with exactly 885,279 AI-generated articles, according to the homepage counter and a Cornell University scrape of the launch corpus. By 21 November 2025, the site’s ticker had passed one million, reporting 1,016,241 entries, still far short of the roughly 7.07 million articles on the English-language Wikipedia as of 1 September 2025.

Grokipedia’s pages are drafted, expanded and “fact-checked” by Grok using a blend of web pages, news outlets, books, academic sources and content from Wikipedia itself. Supporters frame it as a bold experiment in machine-curated knowledge. But early independent analyses and news investigations highlight patterns of right-leaning framing, heavy reuse of Wikipedia text and extensive citation of low-credibility and extremist sources, raising questions about reliability and originality.

The project debuts at a time when large language models (LLMs) are increasingly used as gateways to information online, and when Wikipedia itself faces pressure from both AI reuse and political attacks, including Musk’s recent calls to boycott what he has labelled “Woke-pedia” over alleged left-wing bias.

Origins in Musk’s Long-Running Feud With Wikipedia

Grokipedia did not emerge in isolation. Over several years, Musk’s tone toward Wikipedia shifted from public praise to open hostility.

  • In 2021, he marked Wikipedia’s 20th anniversary with warm comments, but by 2022 he was arguing that it was “losing its objectivity.”

  • In late 2024, he urged people to stop donating to Wikipedia, accusing it of left-wing bias and referring to it pejoratively as “Woke-pedia”.

  • He has also joked that he would donate a large sum if Wikipedia changed its name to an insulting variant, reinforcing his view that the site is captured by what he calls a “woke mind virus”.

Tensions escalated after Donald Trump’s second presidential inauguration on 20 January 2025. During celebrations, Musk made a straight-armed gesture from the stage that many observers, including European leaders and Jewish organisations, said resembled a Nazi salute; Musk denied any such intent. The controversy prompted global coverage and led to a dedicated Wikipedia article, “Elon Musk salute controversy”.

When wording noting that the gesture had been widely compared to a Nazi salute was added to Musk’s main Wikipedia biography, he responded with a series of posts on X (formerly Twitter) accusing Wikipedia of ideological bias, calling it “Woke-pedia”, and urging followers to stop financially supporting it.

The idea of an AI-built alternative surfaced publicly in September 2025, when Musk appeared at the All-In podcast conference and discussed how Grok ingests Wikipedia and other open data. Host David Sacks suggested publishing Grok’s internal knowledge as an artifact called “Grokipedia,” explicitly presenting it as a way to counter what they saw as Wikipedia’s systemic bias.

Following that appearance, Musk confirmed that xAI was building a new AI-generated encyclopedia to rival Wikipedia, describing it as necessary to remove “propaganda” from the knowledge ecosystem. xAI, founded in 2023 with the stated goal of building a “maximally truth-seeking” AI, positioned Grokipedia as part of its broader mission to “understand the universe.”

On 6 October 2025, Musk announced that an early version of Grokipedia would arrive later that month. After a brief delay “to address content quality issues”, the site launched on 27 October 2025 as v0.1. It briefly struggled under traffic spikes and rate-limiting but soon stabilised. A follow-up v0.2 release landed on 21 November 2025.

Unlike Wikipedia’s slow expansion from a handful of volunteer-written pages in 2001, Grokipedia debuted with a large, AI-written snapshot of hundreds of thousands of topics, produced in advance and published in one go, with almost no visible record of public editorial discussion.

How Grokipedia Works: AI-Written Entries, Centralised Control

At first glance, Grokipedia’s homepage echoes Wikipedia’s minimalist style: a plain layout, a central search bar and a live counter of total articles, along with a prominent label marking the site as “v0.1” at launch. Articles are largely text-only; early reviews noted a lack of images and a very simple design.

Under the hood, the workflow is heavily automated and tightly controlled:

  • Article generation and updates
    Grok — xAI’s flagship LLM, now on versions Grok 4 and Grok 4.1 — generates and revises entries using real-time web search, news coverage, books, academic databases and social-media content, particularly posts on X.

  • “Fact-checked by Grok” label
    Each page carries a timestamp and a badge such as “Fact checked by Grok [n minutes ago]”, but xAI has not published a detailed description of how that process differs from ordinary LLM inference. External reviewers therefore treat the label as opaque branding rather than a transparent verification pipeline.

  • Edits and governance
    Ordinary users cannot directly edit articles. Logged-in visitors can only submit corrections or suggestions via a pop-up form for “wrong information”, which then feeds into xAI’s internal moderation and tooling pipeline, reportedly including Grok itself as a triage layer (a simplified sketch of this flow appears at the end of this section). This stands in sharp contrast to Wikipedia’s model of openly editable pages, public edit histories and talk-page debates.

  • Licensing and reuse of Wikipedia

    • Pages that explicitly reuse Wikipedia text carry the original Creative Commons Attribution–ShareAlike 4.0 (CC BY-SA 4.0) licence, with attribution footers pointing back to Wikipedia.

    • Other pages are released under xAI’s proprietary X Community License, which permits reuse and remixing for non-commercial and research purposes, and allows commercial use that complies with xAI’s acceptable-use policy.

    A Cornell University study titled “What did Elon change? A comprehensive analysis of Grokipedia” found that many Grokipedia articles are nearly identical to their Wikipedia counterparts, and concluded that the project is “heavily indebted” to Wikipedia, the very site it aims to outclass.

This AI-first pipeline lets Grokipedia expand and revise entries rapidly and create relatively detailed pages on obscure topics where Wikipedia coverage is thin or stub-like. But editorial choices, framing and sourcing are mediated by a single model and a single company, rather than a distributed community governed by explicit policies and visible dispute-resolution mechanisms.
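
xAI has not documented the internals of this pipeline, so the sketch below is a deliberately simplified illustration of the reported flow, not xAI’s actual code: a reader files a suggestion, a model-based triage layer scores it, and only the operator’s internal tooling can change the page. All names, categories and thresholds here are hypothetical.

```python
# Hypothetical sketch of a centralised correction pipeline of the kind
# described above: readers submit suggestions, a model-based triage layer
# scores them, and only internal tooling can change the page. All names,
# categories and thresholds are illustrative; none come from xAI.
from dataclasses import dataclass

@dataclass
class Suggestion:
    article: str      # slug of the flagged page
    claim: str        # the passage the reader says is wrong
    rationale: str    # the reader's explanation or proposed fix

def triage_score(s: Suggestion) -> float:
    """Stand-in for the reported Grok triage layer; a real system would
    call an LLM to rate how plausible and well-sourced the report is."""
    score = 0.5
    if "http" in s.rationale:          # crude proxy: sourced reports rank higher
        score += 0.3
    if len(s.rationale) < 20:          # near-empty rationales rank lower
        score -= 0.3
    return max(0.0, min(1.0, score))

def route(s: Suggestion) -> str:
    """Queue strong reports for model revision, send middling ones to
    internal review, and discard the rest. Readers never write directly."""
    score = triage_score(s)
    if score >= 0.7:
        return "queue_for_model_revision"
    if score >= 0.4:
        return "internal_review"
    return "discard"

if __name__ == "__main__":
    s = Suggestion(
        article="grokipedia",
        claim="Launch date listed as 28 October 2025",
        rationale="The site went live on 27 October 2025, see https://example.org",
    )
    print(route(s))  # -> queue_for_model_revision
```

The asymmetry is the point: readers can only ever produce suggestions, while the write path stays entirely inside the operator’s pipeline.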

Early Analyses Flag Right-Leaning Framing and Source-Quality Problems

Within days of launch, journalists and researchers began stress-testing Grokipedia’s content. A pattern emerged: for many mainstream topics, the site closely tracks Wikipedia’s structure and text. But on contested social and political issues, it often shifts emphasis in ways that align more closely with right-leaning talking points and Musk’s publicly expressed views.

Framing on slavery, sexuality and gender

A WIRED investigation and follow-up reporting from outlets such as PinkNews, The Atlantic, NBC News and The Guardian identified multiple pages where Grokipedia appears to normalise fringe or debunked positions:

  • An article on HIV/AIDS reportedly treats “HIV does not cause AIDS” as a legitimate scientific position and highlights “HIV/AIDS skepticism,” despite medical consensus to the contrary.

  • One entry claims that pornography worsened the AIDS epidemic, a statement health experts say is not supported by epidemiological evidence.

  • Several pages on transgender topics use terms like “transgenderism”, describe being trans as a choice or “social contagion,” promote the discredited “rapid-onset gender dysphoria” theory and rely on sources that civil-rights groups classify as anti-transgender advocacy, rather than neutral medical or psychological bodies.

Treatment of scientific racism and far-right figures

The Guardian’s 17 November 2025 analysis found Grokipedia pages that:

  • present eugenics and discredited skull-measurement typologies in a sympathetic light,

  • frame the white genocide conspiracy theory as something that is “currently occurring,” and

  • describe Holocaust denier David Irving and other far-right ideologues in strikingly positive terms, emphasising their “archival rigor” and “resistance to institutional suppression” while downplaying their roles in promoting antisemitism and Holocaust denial.

Additional reporting by The Atlantic, Business Standard and Meduza notes that Grokipedia often treats far-right regimes and white-minority enclaves, such as Rhodesia and Orania, in a way that foregrounds economic performance while minimising systemic racism and repression.

Coverage of Musk and allied topics

Time magazine and NBC News highlight that Grokipedia’s entry on Elon Musk is unusually long and largely favourable, emphasising his achievements and ideological views while omitting or downplaying controversies, including the January 2025 salute incident, which is prominently discussed on Wikipedia. Articles on topics Musk frequently comments on, such as gender transition, Tesla, Neuralink and former Twitter CEO Parag Agrawal, similarly tend to mirror his framing.

The Cornell study provides the most systematic evidence of source-quality issues. Key findings include:

  • Across the full corpus, Grokipedia cited sources classified in prior research as “very low credibility” 12,522 times.

  • It used Twitter/X conversations and Grok chatbot exchanges as sources 1,050 times, including at least one instance where a Grok reply to a prompt about “digging up dirt” on a Belgian politician was cited as if it were a factual reference.

  • While Grokipedia and Wikipedia share 57 of their 100 most-used domains, around 5.5% of Grokipedia’s sources diverge from Wikipedia’s, including sites on Wikipedia’s own blacklist such as the conspiracy outlet Infowars, the white-nationalist platform VDare and the neo-Nazi forum Stormfront, cited dozens of times as apparently authoritative sources (a sketch of this kind of domain-level comparison follows below).
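
The mechanics behind numbers like these are straightforward set arithmetic over citation domains. The sketch below illustrates the idea with toy data; the URLs and blacklist entries are placeholders, not the Cornell study’s corpus.

```python
# Sketch of a domain-level comparison of the kind behind these findings:
# reduce each corpus's citations to domains, compare the top-k lists, and
# cross-check against a blacklist. The URLs and blacklist below are toy
# placeholders, not the study's data.
from collections import Counter
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Reduce a citation URL to its host, dropping a leading 'www.'."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def top_domains(citation_urls: list[str], k: int = 100) -> set[str]:
    counts = Counter(domain(u) for u in citation_urls)
    return {d for d, _ in counts.most_common(k)}

wiki_cites = ["https://www.nytimes.com/a", "https://doi.org/x", "https://bbc.com/b"]
grok_cites = ["https://www.nytimes.com/c", "https://infowars.com/d", "https://doi.org/y"]
blacklist = {"infowars.com", "vdare.com", "stormfront.org"}  # illustrative subset

shared = top_domains(wiki_cites) & top_domains(grok_cites)
flagged = Counter(d for d in map(domain, grok_cites) if d in blacklist)

print(f"shared top domains: {len(shared)}")       # Cornell reports 57 of 100
print(f"blacklisted citations: {dict(flagged)}")  # e.g. {'infowars.com': 1}
```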

Sociologist Taha Yasseri, among other experts, has compared matched sets of Wikipedia and Grokipedia articles. His analysis suggests that Grokipedia entries are typically longer but less lexically diverse, with fewer citations per word and a tendency to prioritise smooth narrative over dense referencing.
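
Both metrics are easy to state precisely: lexical diversity as a type-token ratio (distinct words over total words) and citation density as citations per word. A minimal sketch using toy passages, mirroring the shape of the comparison rather than Yasseri’s exact methodology:

```python
# Sketch of the matched-pair metrics described above: lexical diversity as a
# type-token ratio and citation density as citations per word. The passages
# are toy examples, not data from the actual analysis.
import re

def words(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(text: str) -> float:
    """Distinct words over total words; lower means less lexical variety."""
    toks = words(text)
    return len(set(toks)) / len(toks) if toks else 0.0

def citations_per_word(text: str, n_citations: int) -> float:
    toks = words(text)
    return n_citations / len(toks) if toks else 0.0

wiki_text = "The city was founded in 1850. The city grew rapidly once the railway arrived."
grok_text = ("The city, founded in 1850, grew and grew as the city welcomed "
             "the railway and the city expanded further.")

print(round(type_token_ratio(wiki_text), 2), round(type_token_ratio(grok_text), 2))
print(citations_per_word(wiki_text, 3), citations_per_word(grok_text, 1))
```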

Taken together, the research consensus so far is that, although Grokipedia leans heavily on Wikipedia’s structure, it relaxes or reconfigures Wikipedia’s sourcing norms, making more room for fringe and ideologically charged material.

Where Grokipedia Excels — and Where It Falls Short — Against Wikipedia

A simplified comparison as of late November 2025 shows how Grokipedia sits alongside other reference models. The English-language Wikipedia remains the largest, with 7,067,049+ articles as of 1 September 2025. It is built by human volunteers through an open, consensus-driven process with fully transparent edit histories. Its sourcing model relies on strict rules, including a well-developed list of “perennial sources” and a formal blacklist of outlets deemed unreliable. This approach gives Wikipedia broad coverage, visible dispute mechanisms and mature policies, but it also leaves the project vulnerable to “edit wars”, uneven treatment of niche topics and ongoing criticism that some political articles reflect a liberal or establishment-leaning bias.

Grokipedia, by contrast, is almost entirely AI-generated via xAI’s Grok model, with user suggestions reviewed centrally by xAI rather than applied directly. It launched with 885,279 articles and had reached 1,016,241+ entries by 21 November 2025, making it large but still far smaller than Wikipedia. Its sourcing draws on a mix of news outlets, academic publishers, general web content and social-media posts, and in practice uses looser filters than Wikipedia, including citations to sites that Wikipedia explicitly blacklists. Its main strengths are speed and scale: it can generate long, narrative-style entries quickly and fill in gaps on obscure places, people and institutions. However, it is heavily derivative of Wikipedia’s structure and text, operates under opaque governance, and has a documented pattern of citing low-credibility and extremist sources, alongside a noticeable right-leaning tilt on many contested social and political topics.

Projects like Conservapedia occupy a different niche again. Conservapedia is written by human editors who explicitly adopt a conservative editorial stance and maintain a much smaller corpus of roughly 50,000–60,000 articles. Its sourcing reflects a selective conservative filter and it is used mainly within a narrow audience that shares its ideological outlook. While this clarity of stance may appeal to some readers, it also means Conservapedia has very limited academic or mainstream traction, slow growth and a comparatively narrow topical scope.

Finally, a growing class of AI search tools, such as Perplexity and others, do not maintain a stable encyclopedia corpus at all. Instead, they generate on-the-fly answers using large language models with inline citations, pulling from the live web each time a query is issued. There is no persistent article set or community editing layer; quality depends directly on the retrieved sources at the moment of the query. Their strengths lie in producing concise, query-specific summaries with visible citations, but they lack enduring pages, shared governance and long-term curation, and therefore function more as dynamic research assistants than as encyclopedias in the traditional sense.
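
That contrast with a persistent corpus can be made concrete with a minimal sketch of the retrieve-then-synthesise loop such tools run on every query. The `search_web` and `summarise` functions below are stubs standing in for live search and LLM APIs, not any particular product’s implementation.

```python
# Sketch of the retrieve-then-synthesise loop described above: no stored
# article set, just fresh retrieval and a fresh summary on every query, with
# citations attached at answer time. search_web and summarise are stubs.

def search_web(query: str) -> list[dict]:
    """Stub for a live web search call."""
    return [
        {"url": "https://example.org/a", "snippet": "Grokipedia launched on 27 October 2025."},
        {"url": "https://example.org/b", "snippet": "It debuted with 885,279 articles."},
    ]

def summarise(query: str, sources: list[dict]) -> str:
    """Stub for the LLM synthesis step; a real tool prompts a model here."""
    return " ".join(s["snippet"] for s in sources)

def answer(query: str) -> str:
    sources = search_web(query)       # retrieval happens per query, not per corpus
    body = summarise(query, sources)  # synthesis happens per query too
    cites = " ".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
    return f"{body}\n{cites}"         # citations attached at answer time

print(answer("When did Grokipedia launch?"))
```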

Analysts acknowledge several real advantages for Grokipedia:

  • Speed on “long-tail” topics
    AI makes it possible to produce reasonable first-draft coverage for small towns, lesser-known schools and minor historical figures where Wikipedia may only have a stub — or nothing at all.

  • Unified stylistic voice
    Articles often read like a single, flowing essay rather than the patchwork tone shifts that result from many authors editing a Wikipedia page over years.

But these strengths come with substantial trade-offs:

  • Opacity
    Grokipedia provides no public edit history or talk pages, and xAI has not published detailed editorial or sourcing guidelines. Readers see only the current text and a timestamp, with little insight into how or why changes were made.

  • Source risk and ideological tilt
    The documented inclusion of conspiracy sites, neo-Nazi forums and white-nationalist publications as references, sometimes in ways that normalise their perspectives, introduces systematic risk for readers who assume Wikipedia-like sourcing standards.

  • Power centralisation
    Editorial control effectively resides with xAI and Musk, rather than a distributed volunteer community. Critics argue this is particularly troubling on topics that intersect with Musk’s business interests and culture-war positions.

What Grokipedia Reveals About AI’s Role in Knowledge Infrastructures

Grokipedia has quickly become a test case for broader debates about AI’s role in public-knowledge infrastructure.

On the positive side:

  • It shows how LLMs can help bootstrap encyclopedic coverage on under-documented topics by drafting long-form entries at low marginal cost.

  • The two major comparative studies, from Cornell and from Taha Yasseri, offer a rich dataset for examining how AI repackages human-curated sources (Wikipedia, news, academic work) into synthetic outputs, and where framing, selection and sourcing diverge.

On the negative side:

  • Grokipedia illustrates how relaxed source filters and model-driven summarisation can shift an encyclopedia’s centre of gravity on science, history and identity, even when the prose sounds neutral.

  • By citing sites like Infowars, VDare and Stormfront as normal references, Grokipedia risks blurring the line between documenting extremist ideas and legitimising them, especially for casual readers unfamiliar with those outlets’ reputations.

  • The project intensifies worries about billionaire-controlled information platforms, particularly when paired with ownership of a major social network (X) that can direct traffic to the encyclopedia and amplify its framing.

For Wikipedia, Grokipedia is simultaneously a threat and a validation. It underscores the enduring value of transparent, human-driven processes — edit logs, talk pages, formal sourcing rules — that Grokipedia currently lacks. At the same time, it demonstrates that AI systems trained heavily on Wikipedia can become commercial competitors that repackage its labour without sharing governance or community norms.

Outlook: Iteration, Scrutiny and the Search for Balance

Musk has said that early Grokipedia releases are “just the beginning” and has floated the idea of rebranding it as “Encyclopedia Galactica” once it is “good enough,” even suggesting future snapshots could be transmitted to the Moon, Mars and deep space.

xAI, meanwhile, continues iterating on the underlying Grok models. Grok 4, released in July 2025, introduced native tool use and large-context reasoning, while Grok 4.1, launched in mid-November 2025, focuses on improved emotional intelligence, creative writing and reduced hallucinations, and comes with a specialised Grok 4.1 Fast variant aimed at agentic tool-calling with a 2-million-token context window. If those technical advances are paired with stricter sourcing standards, clearer editorial policies and genuine external oversight, Grokipedia could yet evolve into a hybrid human–AI reference work that complements, rather than merely contests, Wikipedia.

Researchers like Taha Yasseri stress, however, that no AI system can escape its training data or design choices. If the input web contains ideological noise, disinformation and hate speech — as it inevitably does — then any “truth-seeking” system must be judged by how it filters, weighs and contextualises that material, not just by how fluently it writes.

For now, Grokipedia stands as a provocative experiment: a machine-written encyclopedia built on top of the very project it seeks to surpass, reminding readers that, even in the age of AI, truth on the internet is negotiated, not downloaded.
