
Why the next phase of artificial intelligence will be shaped by contracts, consent, and credibility

Artificial intelligence didn’t arrive quietly in media work. It arrived all at once—through writing tools, image generators, search engines, newsroom experiments, and platform deals that most creators had no say in. Three years in, the question is no longer whether AI will affect media workers, but how much control they will have over the terms of that impact.

That tension sat at the centre of a recent Canadian Freelance Guild panel on AI and media labour. Rather than debating whether AI is “good” or “bad,” the discussion focused on where the real pressure points are emerging: consent, disclosure, ownership, and the widening gap between how AI is marketed and how it actually performs in creative work.

AI isn’t replacing creativity. It’s being misused as if it could

Despite the hype, large language models remain probabilistic systems. They predict plausible next words. They don’t understand context, intent, or consequence. Many of the most visible AI failures in media stem from treating these systems as creative stand-ins rather than as tools with very specific limits.
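
To make that mechanism concrete, the toy sketch below (plain Python, with a hand-written probability table standing in for a trained model; every name and number is invented for illustration) shows what "predicting plausible next words" amounts to. The loop samples whichever continuation is statistically likely; nothing in it checks whether the result is true.

```python
import random

# Toy stand-in for a trained language model: a hand-written table of
# next-word probabilities. A real model learns billions of weights from
# training data, but the generation loop works on the same principle.
NEXT_WORD_PROBS = {
    "the": [("reporter", 0.5), ("editor", 0.3), ("deadline", 0.2)],
    "reporter": [("filed", 0.6), ("verified", 0.4)],
    "editor": [("cut", 0.7), ("approved", 0.3)],
    "filed": [("the", 1.0)],
    "cut": [("the", 1.0)],
}

def generate(start, steps=6):
    """Build a sentence by repeatedly sampling a plausible next word.
    Nothing here models context, intent, or consequence; only likelihood."""
    words = [start]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation; stop generating
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the reporter filed the editor cut the"
```

The output reads fluently but is meaning-blind, which is precisely why these systems work as assistants and fail as stand-ins.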

That distinction came up repeatedly in the discussion. Jon Schleuss, president of the NewsGuild–CWA and a former data journalist at the Los Angeles Times, emphasized that the real value of AI lies in how it supports—not substitutes for—human work. As he put it, “You need to be able to use these tools in ways that don’t replace your creativity, but really help you get to that creative task without all of the monotonous stuff that we have to deal with anyway.”

Used well, AI can help with searching, sorting, and processing large volumes of material—what one panelist described as work that removes the “haystack” rather than replacing the needle. Used poorly, it introduces risk, error, and a false sense of authority.
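
In practice, that "haystack removal" is often nothing more exotic than bulk triage: letting software narrow a pile of documents down to the few a human should actually read. A minimal sketch of that step (plain Python; the filenames, text, and search terms are invented examples, not any panelist's actual workflow):

```python
# "Haystack removal" sketch: score a pile of documents against search
# terms and surface only the strongest matches for human review.
# The tool shrinks the pile; judging what the survivors mean stays human work.

SEARCH_TERMS = {"contract", "consent", "training", "rights"}

documents = [  # stand-ins for archives, FOI releases, transcripts, etc.
    ("minutes_2023.txt", "routine scheduling notes and venue bookings"),
    ("vendor_deal.txt", "training data licensing contract where consent was not obtained"),
    ("style_guide.txt", "house style for headlines and captions"),
    ("legal_memo.txt", "creator rights under the standard freelance contract"),
]

def score(text: str) -> int:
    """Count how many search terms appear in the document text."""
    return len(SEARCH_TERMS & set(text.lower().split()))

# Keep only documents matching at least two terms, best matches first.
shortlist = sorted(
    ((score(body), name) for name, body in documents if score(body) >= 2),
    reverse=True,
)
for hits, name in shortlist:
    print(f"{name}: {hits} matching terms, flag for human review")
```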

Consent is the real fault line

If creativity is difficult to replace, ownership is far more vulnerable.

A significant portion of the panel focused on how media companies are training AI systems on existing journalism, photography, and creative work—often without meaningful consent from the people who produced it. Freelancers are particularly exposed. Work-for-hire contracts frequently strip creators of downstream rights, even when their work is later used to train systems that compete with them.

Moderator George Butters captured the problem bluntly when reflecting on how common scraping and reuse have become: “I didn’t necessarily have any kind of consent covenant or anything to say I was doing that, because that was just normal practice.”

What was once routine now has much higher stakes. Without explicit opt-in or opt-out clauses, creators often have no practical way to control how their work is reused once it enters large data systems.

The panel’s conclusion here was clear: without collective pressure—through associations, unions, or shared standards—consent is unlikely to be granted voluntarily.

Disclosure isn’t about distrust. It’s about credibility

Another major theme was disclosure—specifically, when and how AI use should be communicated to audiences.

In journalism and media, trust is the underlying currency. Readers assume that when a human signs their name to a piece of work, that human has verified it and stands behind what’s being published. Undisclosed AI use breaks that chain of accountability.

Legal scholar Daniel Escott framed disclosure not as a restriction, but as a safeguard: “The disclosure is not because we don’t trust you. The disclosure is because we do.”

As Escott noted, courts already require disclosure when AI is used in submitted materials—not because AI is prohibited, but because transparency allows recipients to assess credibility appropriately. Media work is heading in the same direction.

Ironically, disclosure may become a competitive advantage. As audiences grow more skeptical of low-quality, AI-generated content, transparency can signal care, judgment, and responsibility.

The law is behind the technology—and catching up will be messy

Copyright frameworks were never designed for systems that can ingest entire archives, remix them statistically, and produce outputs that are neither direct copies nor clearly original. As a result, questions around training, ownership, and impersonation are largely being resolved through litigation rather than legislation.

As Escott warned, “The world of copyright law is weird, wacky, wonderful, and half imaginary. When you add AI, it’s just like ten other levels of confounding issues.”

For media workers, this uncertainty has real consequences. Content generated primarily by AI may not be copyrightable at all. And if ownership can’t be clearly established, neither can compensation.

What media workers should actually do next

Rather than offering a checklist, the panel emphasized a set of priorities that freelancers and media workers should be carrying forward:

  • Use AI where it genuinely saves time without eroding authorship
  • Treat AI as infrastructure, not a creative substitute
  • Pay close attention to contracts, consent clauses, and data ownership
  • Assume accountability doesn’t disappear just because a tool is involved

The most sobering insight was also the most grounding: AI doesn’t remove responsibility—it redistributes it.

As Schleuss put it when describing the current landscape, “Right now, it’s the wild west.” For freelancers especially, that makes collective standards, shared knowledge, and clear boundaries more important than ever.

The next phase of AI won’t be defined by better prompts or newer tools. It will be defined by whether media workers can shape the rules under which those tools are used.

That work is slower and less glamorous than the hype cycle—but it’s where the future of media labour is actually being decided.