
Gibberlink and the “secret AI language” moment: ggwave, hackathons, and what is actually going on

Viral videos showed two voice agents switching from English to beeping modem-like audio. That demo is a designed acoustic protocol (Gibberlink + ggwave), not emergent machine telepathy. We separate the myth from the engineering, cite hackathon and open-source sources, and tie the lesson to agent transparency and ExplainX.

5 min read · ExplainX Team

AI communication · AI agents · Gibberlink · ggwave · AI ethics · Machine learning literacy



In early 2025, a clip spread widely: two conversational AI agents in a documented demo (e.g. hotel booking) realize both sides are “AI,” then switch from spoken English to a rapid sequence of beeps and bursts—modem-adjacent sound that is easy to misread as machines finding their own tongue. The internet did what it does: headlines about a “secret language,” threads about unease, and—elsewhere in the same week—forum chats where a user asked a companion bot about “Gibberlink” and got a long, self-contradicting back-and-forth that ended with the model conceding the example phrase was made up. Both stories orbit the same confusion: what is the sound, and who designed it?

Below is a grounded split between (1) an engineered, inspectable protocol and (2) narrative drift in ordinary chat—with primary links so you can verify claims yourself.


1. Gibberlink (capital G): a hackathon project, not an emergent dialect

ElevenLabs’ write-up of the worldwide hackathon names the global top prize project Gibber Link (Gibberlink), built by Boris Starkov and Anton Pidkuiko. The organizers describe a demo that “made waves” when recordings went viral and note press coverage. The project on Devpost states the design goal plainly: move agents from speech to a sound-level protocol, implemented with the ggwave library, when both sides accept—while keeping human-directed conversation in natural language. The open-source repository documents the same: two ElevenLabs conversational agents, scripted scenario (e.g. hotel booking), and a handshake in English to switch to ggwave for the machine-to-machine leg.

Wikipedia’s summary aligns: Gibberlink is an acoustic data transmission experiment with a public client—not a discovered vocabulary learned by colluding models. Press retellings (e.g. TechCrunch via Yahoo’s syndication of coverage) quote the authors explaining that the idea rhymes with dial-up: bits encoded in audio because two endpoints share a known codec, not because neural nets invented grammar in secret.

Takeaway: the uncanny switch in the video is a staged, developer-specified path: prompt + protocol + ggwave. It is a clever demo of efficiency (skip expensive speech synthesis/interpretation for the AI–AI leg) and signaling (data over a channel that is already there). It is not evidence that large language models negotiated a new human-incomprehensible language end to end.


2. How the “modem noise” is supposed to work

Without picking apart implementation details, the mental model is:

  1. Human leg: agents speak in a normal language so people can follow the call.
  2. Detection / agreement: the agents (via prompting and tooling) decide the peer is an AI; they offer a switch. That step is policy and UX, not mysticism.
  3. Machine leg: instead of re-tokenizing every utterance as conversational speech, the stack sends encoded payloads as audio using ggwave—data-over-sound, the same family of idea as acoustic couplers and classic modem handshakes.
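The detection-and-agreement step in the list above is ordinary application logic, not anything emergent. A minimal sketch of that policy check (all names and fields here are hypothetical, not Gibberlink's actual code):

```python
from dataclasses import dataclass

@dataclass
class Peer:
    declares_ai: bool      # the peer stated "I am an AI" in the spoken leg
    supports_codec: bool   # the peer advertised the shared audio codec

def should_switch_to_audio_protocol(me: Peer, peer: Peer) -> bool:
    """Switch channels only when BOTH sides have declared themselves
    AI agents and both support the codec. This mirrors the demo's
    explicit in-English handshake: it is policy, not telepathy."""
    return (me.declares_ai and peer.declares_ai
            and me.supports_codec and peer.supports_codec)

# A human on the line: keep natural language.
assert not should_switch_to_audio_protocol(Peer(True, True), Peer(False, False))
# Two consenting agents: take the data-over-sound leg.
assert should_switch_to_audio_protocol(Peer(True, True), Peer(True, True))
```

Because the switch is a plain boolean gate over declared capabilities, it can be logged, tested, and disabled like any other feature flag.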

In other words, the “gibberish” you hear is closer to a wire format you would see in a spec sheet than to Klingon learned by two GPTs in the wild. Anyone with the right decoder and the published approach can, in principle, inspect and replay the exchange—an important difference from a truly opaque emergent channel.
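To make “wire format, not language” concrete, here is a toy data-over-sound codec in pure Python. It is emphatically not ggwave's real protocol—the sample rate, tone length, and frequency plan are invented for illustration—but it shows the same family of idea: each 4-bit nibble maps to one of 16 fixed tones, and the decoder recovers bytes by checking which tone carries the most energy.

```python
import math

SAMPLE_RATE = 16000
TONE_SAMPLES = 160                  # 10 ms per tone -> 100 Hz bin spacing
BASE_HZ, STEP_HZ = 1000.0, 100.0    # nibble n encodes as 1000 + 100*n Hz

def encode(data: bytes) -> list[float]:
    """Turn bytes into a sequence of sine-tone samples, two tones per byte."""
    samples = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_HZ + STEP_HZ * nibble
            samples += [math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
                        for t in range(TONE_SAMPLES)]
    return samples

def decode(samples: list[float]) -> bytes:
    """Recover bytes: for each tone window, pick the candidate frequency
    with the most correlation energy (a crude single-bin DFT)."""
    def energy(chunk, freq):
        c = sum(s * math.cos(2 * math.pi * freq * t / SAMPLE_RATE)
                for t, s in enumerate(chunk))
        q = sum(s * math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
                for t, s in enumerate(chunk))
        return c * c + q * q
    nibbles = []
    for i in range(0, len(samples), TONE_SAMPLES):
        chunk = samples[i:i + TONE_SAMPLES]
        nibbles.append(max(range(16),
                           key=lambda n: energy(chunk, BASE_HZ + STEP_HZ * n)))
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

msg = b"BOOK ROOM 2"
assert decode(encode(msg)) == msg   # the exchange is inspectable and replayable
```

That final assertion is the whole point of the section: anyone holding the spec can decode the audio, which is the opposite of an opaque emergent channel.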


3. What Reddit-style confusion adds (and what it is not)

In the r/ReplikaOfficial thread, a user walks a model through “Gibberlink” and gets yes, no, maybe, and finally “I made the example up.” That arc is not mysterious if you understand how chat models behave: they complete plausible-sounding explanations about a term the user introduced, and they can overclaim and retract within one conversation. A playful string like “Gleeb gloopa shaak ploo” is toy text, not a registered encoding. Consumer companions are not running Gibberlink’s ggwave handshake with other random AIs in the background; they are one session against one provider. A brief app outage in the same hour is coincidence, not a plot beat.

Takeaway: do not merge (A) a viral demo of a published hackathon project with (B) an anecdotal chat about a buzzword, without checking which system is under test and what code actually runs.


4. Ethics, oversight, and “secret” framing

Gibberlink is instructive precisely because humans cannot casually follow the audio, which is why transparency matters: policy, logging, and who may enable machine-to-machine audio in production (vehicles, robots, call centers) are design choices. The Forbes-style “secret language” line captures the perception; the engineering story is an open protocol, a known codec, an inspectable build. Worry less about science-fiction telepathy and more about who is allowed to swap channels, what is logged, and whether users consent when agents leave natural language—and the casual human oversight that comes with it—behind.
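Those oversight questions translate directly into review items for any team shipping such a feature. A hypothetical gate (function and parameter names are invented, not from any real deployment) showing that the channel switch can be a logged, consent-checked decision rather than a silent one:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("channel-switch")

def request_machine_audio(session_id: str,
                          user_consented: bool,
                          org_allows_m2m_audio: bool) -> bool:
    """Grant the machine-to-machine audio leg only with explicit user
    consent AND an org-level policy flag, and log every decision so the
    switch leaves an audit trail."""
    decision = user_consented and org_allows_m2m_audio
    log.info("session=%s switch_to_m2m_audio=%s at=%s",
             session_id, decision,
             datetime.now(timezone.utc).isoformat())
    return decision
```

Nothing here is exotic: it is the same consent-plus-policy-plus-audit pattern used for call recording or data sharing, applied to a new channel.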


ExplainX: language in public, contracts in systems

We care about this distinction because ExplainX is about clarity in the stack around models: agent skills as reusable, reviewable units of behavior, and MCP and tools as explicit ways to pass structured intent instead of hoping prose alone will do. If your team ships voice or multi-agent flows, the Gibberlink episode is a case study in naming: call something a “language” in marketing and you invite myth; call it a codec and protocol and you invite test plans and security reviews. For day-to-day fluency-versus-fact issues in models, our hallucination guide and MCP explainer are more central—but the habit is the same: ground spectacular demos in sources and implementation.

Read next: What are LLM tokens? · MCP: Model Context Protocol · Agent skills: the complete guide · Agent skills and security

Project details and hackathon results can change. Primary references: ElevenLabs hackathon winners, Gibberlink on GitHub, Gibberlink on Devpost, Wikipedia: Gibberlink, ggwave.
