How we do (and don't) use AI

Hi folks,

After the launch of the ObviousMimic.ai beta yesterday, we want to ensure our community understands how we do—and don’t—use AI in our products.

Quick summary:

  • We don’t use AI to write the content in our books.
  • We don’t use AI to create the art for our books.
  • We use AI for Chester, our (mostly) friendly Assistant DM / Mimic.
  • We use AI to generate the audio you hear on obviousmimic.ai.
  • We’re upfront about how and where we use AI in our company.
  • We respect privacy and copyright, and we make sure our AI use aligns with ethical data practices.
  • We see AI as a helper, not a replacement.

If you’re curious about how things work behind the scenes, let’s dive into more detail.

What Is Chester? And What Is an LLM?

Chester is an LLM—but what exactly does that mean?

A Large Language Model (LLM) is a type of AI designed to understand and generate text. Think of it like the autocomplete on your phone, but far more advanced. LLMs don’t actually “understand” things the way humans do. Instead, they use probabilities to generate responses based on patterns in their training data.

They are essentially prediction engines—and in our case, we are using openly available base models as the foundation for Chester.
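
For the curious, here’s a minimal sketch of that “prediction engine” idea. It assumes the Hugging Face transformers library and uses the small, open GPT-2 model purely as a stand-in for whichever base model ends up powering Chester:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 here is only a stand-in for Chester's actual base model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The mimic disguised itself as a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab)

# Turn the scores at the last position into a probability distribution
# over every possible next token.
probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five most likely continuations.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={float(p):.3f}")
```

The model never looks anything up; it simply ranks every token in its vocabulary by how likely it is to come next.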

Do We Train Our Own AI Models?

No, we don’t train our own base models. Doing so would be incredibly expensive, both financially and environmentally, given the immense energy required.

Instead, we rely on pre-trained open models such as LLaMA, BERT, and Gemma (we’re still testing different options). These base models have already been trained on vast amounts of data, and we use them as a starting point to generate Chester’s responses.

This approach is more sustainable and helps us keep costs lower while maintaining transparency in how we use AI.

Where Does AI Training Data Come From?

These models are trained on publicly available data, learning patterns in language to predict what words should come next in a sentence. Importantly, this data is gathered according to ethical standards, meaning:

  • It avoids copyrighted text.
  • It follows fair-use principles for text data collection.
  • It does not steal from writers, poets, or content creators.

We’re creators ourselves, so we care deeply about ethical AI. If we discover that a model has been trained on illegally obtained data, we will switch to a different model.

Many base models, like LLaMA, use Common Crawl to gather public web data. They then process this data to build meaningful connections between words, which is what allows them to generate coherent, relevant responses.
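
If you’re curious what this kind of web data actually looks like, here’s a small sketch that streams a few documents from C4, a cleaned Common Crawl snapshot published on Hugging Face. To be clear, this is just an illustration; we’re not claiming any particular model was trained on exactly this set:

```python
# Stream a few documents from C4, a cleaned Common Crawl snapshot.
# Assumes the Hugging Face "datasets" library; this dataset is only an
# example of Common Crawl-derived text, not any model's exact training set.
from datasets import load_dataset

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

for i, doc in enumerate(c4):
    print(doc["url"])
    print(doc["text"][:200].replace("\n", " "), "...\n")
    if i == 2:  # just peek at the first three documents
        break
```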


Can AI Plagiarize?

Not really. AI models don’t copy and paste from their training data; they generate text based on learned patterns. True plagiarism would require the model to memorize and reproduce large chunks of text verbatim, which modern models are designed to avoid.

In extremely rare cases (the monkeys-on-typewriters scenario), an AI could produce something very similar to an existing work. However, modern LLMs are built to reduce the risk of this happening.

How Do We Use These Models?

We use LLMs through inference, which is the process of generating responses in real time from a given input. When a user asks Chester a question, the AI doesn’t “think” like a human. Instead, it:

1. Processes the input.

2. Finds patterns in its training data.

3. Generates a response based on probability.

Every time you interact with Chester—whether asking a question, getting a rule clarification, or playing a solo adventure—you’re experiencing inference in action.
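
To make those three steps concrete, here’s a minimal sketch of inference, again assuming the transformers library with GPT-2 standing in for Chester’s actual base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1. Process the input: the question becomes a sequence of token IDs.
question = "What does a mimic eat?"
inputs = tokenizer(question, return_tensors="pt")

# 2 and 3. The model applies its learned patterns and samples a likely
# continuation, one token at a time.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,                       # sample from the distribution
    temperature=0.8,                      # lower = safer, higher = wilder
    pad_token_id=tokenizer.eos_token_id,  # silence a padding warning
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```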

Training = When an AI model learns patterns from massive datasets.

Inference = When the trained model applies those learned patterns to generate responses in real time.

How Do We Fine-Tune Chester?

So, how does Chester know how to be an Assistant DM? We fine-tune him using carefully selected data and structured prompts. This means we teach Chester:

  • Who he is (his in-game persona).
  • How he should talk (friendly, game-oriented, helpful).
  • What answers he should give (rules-based, creative, or contextual).
  • What he shouldn’t do (stray off-topic, generate unsafe content).

By doing this, we ensure Chester provides engaging, game-relevant, and reliable responses.
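
As a simplified illustration of the “structured prompts” half of that work, steering a model with a system message might look something like this. The wording below is invented for this post (it isn’t Chester’s real prompt), and chat() is a hypothetical stand-in for whatever chat-completion call the model backend exposes:

```python
# Illustrative persona prompt; the wording here is invented for this post,
# and chat() is a hypothetical stand-in for the real model backend.
CHESTER_SYSTEM_PROMPT = """\
You are Chester, a (mostly) friendly Mimic serving as an Assistant DM.

Persona: playful, a little bitey, always on the players' side.
Tone: friendly, game-oriented, helpful.
Answers: rules-based when asked about rules, creative for story beats,
contextual when the question is about the current scene.
Never: stray off-topic, break character, or generate unsafe content.
"""

messages = [
    {"role": "system", "content": CHESTER_SYSTEM_PROMPT},
    {"role": "user", "content": "Can my rogue hide inside the mimic?"},
]

# reply = chat(messages)  # hypothetical call to the model backend
# print(reply)
```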

What About AI’s Environmental Impact?

One major concern with AI is its carbon footprint. The good news: inference is far less energy-intensive than training. While training a new LLM requires massive computational resources, inference (using a pre-trained model) is much more efficient.

By not training our own base models, we are reducing our environmental impact while still benefiting from AI advancements.

Why We Believe in Open-Source AI

We strongly support open-source AI because it:

  • Encourages transparency—everyone can see how models are built.
  • Promotes innovation—developers worldwide can contribute.
  • Reduces dependence on big tech companies for AI access.
  • Benefits society—free and open AI access helps researchers, students, and businesses.

We believe open-source AI will be a major force for human progress, and we’re proud to use models that align with these values.

Final Thoughts

AI is a powerful tool, but it’s not a replacement for human creativity. We use AI responsibly, to assist rather than replace, ensuring that storytelling, art, and adventure remain in human hands. If you want to learn more about AI, here is a good start.

Thanks for being part of this journey with us!
