Writing for humans? Perhaps in future we'll write specifically for AI – and be paid for it

'There needs to be a better economic as well as copyright framework', Thomson Reuters CPO tells us

Interview Thomson Reuters, based in Canada, recently scored a partial summary judgment against Ross Intelligence, after a US court ruled the AI outfit's use of the newswire giant's copyrighted Westlaw content didn't qualify as fair use.

The ruling isn't precedent-setting – it didn't settle whether training AI on publicly accessible or proprietary content without permission counts as fair use or copyright infringement. That issue will come up again and again in the 40 or so US copyright cases against AI companies. Odds are it won't be decided in the US until it reaches the Supreme Court or gets decided by government fiat, as OpenAI has requested.

It was a small victory and isn't likely to derail the AI hype train that Thomson Reuters, in fact, has a ticket to ride. As we noted last June, the biz in 2023 acquired Casetext, which makes an AI assistant for legal professionals called CoCounsel.

We spoke with David Wong, chief product officer for Thomson Reuters, to better understand how the publishing and information giant expects to negotiate with AI firms and integrate generative AI into its offerings.

Wong leads the information giant's product, design, and content and editorial teams – including the authors responsible for its legal and tax content. That material powers Thomson Reuters' research platforms and software products, which generate about 90 percent of the biz's revenue through subscriptions and recurring services.

The Register: How should AI companies be dealing with publishers and with copyright concerns? Based on the Ross Intelligence decision and Meta's recent failure to have an authors' lawsuit dismissed, it appears that the courts won't give AI firms an automatic pass on infringement allegations.

Wong: We recently were awarded summary judgment as part of the Ross Intelligence litigation, where Ross had obtained access to a reasonable chunk of our copyrighted Westlaw content and used it to create an AI system.

They didn't use it to create a generative AI system, but they used it to create an AI system.

We challenged their use. We didn't believe that it was fair use because they had broken our terms and conditions to get access to that content. And they used it to produce a new piece of intellectual property, which was born off the back of that content. And so you're correct that the courts awarded us a favorable judgment there, where they deemed that that specific use was not fair use.

[The court] didn't specify it as being precedent-setting. It wasn't intended to cover generative AI use cases and things like that. But I think it is an indication of direction. And it certainly is consistent with Thomson Reuters' perspective, which is that there are many forms of fair use for copyright. But specifically for the training of an AI system – which is encapsulating and transforming the content into a system that can use, can reason, can replicate, or can recite that content – that is not a form of fair use.

This would be in contrast to, for example, making a parody or using AI to transform a piece of written content or something like that. But the training of a model, I think, is a specific situation where we strongly believe that that is not fair use and that there needs to be a better economic as well as copyright framework for AI companies and for content producers to innovate together.

And one of the reasons why we believe so strongly in this is because we are in fact building AI systems.

At Thomson Reuters, in the past two to three years, we have been investing incredibly heavily. In fact, our CEO has said to the marketplace that we're investing more than $200 million a year to develop AI features across our products so that our customers – lawyers, accountants, tax professionals – can get, for example, a domain-specific AI system which has the knowledge of Westlaw and the knowledge of Practical Law and can use those assets to help them do work. And so it's in our interest to have an approach where we can create that type of, we think, industry-changing technology.

It just needs to be [a fair system] that compensates those who produce the content [to make these AI systems work].

The Register: So what's a reasonable approach to accomplish this? Should companies negotiate content deals on a case-by-case basis or should there be a common framework for content licensing?

Wong: I would like to see more collaboration between the different AI companies and content producers. I think, first, there needs to be industry discussion. As you can see with OpenAI and Google, they're trying to use policy mechanisms, which I don't think are as helpful as actually having the conversation.

At the fundamental level, there needs to be a business model and a method for payment between those who own copyrighted works and those who wish to use them downstream. That's no different than any other mechanism today.

As an example, we know that educating people is for the public good, but students still have to pay for their textbooks. The authors don't write those textbooks out of the goodness of their hearts.

There is obviously a public good to it, but there is a mechanism to make sure that books are sold and royalties are collected. There's a productive mechanism there. And then that creates a favorable market for producers to create that content [that people then use].

There will be a much richer ecosystem of creators who will want to publish and sometimes maybe even publish solely for the sake of AI

I think there's a very similar parallel here where if we go to a complete fair use situation, the unintended consequence is that walls will be put up and access will be clamped down. Because if putting your content out on the internet means that an AI company is going to come and take all of it to produce an AI system that could then supplant or replace the value of that content, then there's not going to be a lot of incentive to publish, and it'll be harder to access.

If there's a mechanism where you can be properly and fairly compensated, then it'll be the opposite, where there will be more production. There will be a much richer ecosystem of creators who will want to publish and sometimes maybe even publish solely for the sake of AI.

What we've seen in some of our development is that there are definitely certain types of content that can be written or produced which are uniquely suited to AI. Right now, a lot of the content that's published is intended for human use – people are writing books, articles, and other things meant for human consumption.

We've actually found in some of our experimentation that if you really optimize for trying to improve an AI system, you write different things. And there isn't an incentive for people to create that type of content unless they are compensated. That's what OpenAI is doing when they hire people to do training and reinforcement learning for their AI models.

The Register: Do these issues differ based on the type of media we're talking about? Are the copyright and compensation concerns the same when you're talking about say text-to-image generation versus text-to-text generation or creating videos from text or speech synthesis or music?

Wong: I think the economic argument applies pretty consistently. You want to make sure that you have an incentive for people to produce content and to create creative works.

That being said, I do think that the professional use case that we work in at Thomson Reuters with lawyers and with accountants is a pretty good example. Contrary to what OpenAI said – where they've created a model saying we've trained these models not to regurgitate or … to replicate the content they're trained on verbatim – in cases where you have professional work, you actually want the systems to cite their sources and produce an explanation of how they reached their conclusions by referencing those sources. You don't want a paraphrased version. You actually do want a citation. You want to be able to say, "Ah, this legal argument or this argument about how to treat your taxes is based off of this law, this analysis, this opinion, or this case that has been argued."

The fact that our professional use cases generally require citation, I think, really points to a connection between the input data and the output of an AI system. [That connection] makes the imperative for fair compensation, as well as the involvement of the copyright owner, very clear for those use cases.

The Register: Do you think it's desirable to have AI-enabled content generation? Is it going to improve the situation for everyone or is it going to depress wages if, say, you can write stock reports using AI rather than professional journalists? Is that ultimately a benefit or is that a problem?

Wong: We're seeing early examples where we are applying AI ourselves within our teams to produce and write content.

In the short to medium term, most of the uses for AI are about eliminating rote work and drudgery from creative work. You said a stock quote or [earnings] announcement. That's generally pretty low in terms of creativity and excitement for people. It's the sort of thing that must be done to provide some of those services. So at least in the short term, I think it's generally about removing drudgery – helping teams and individuals to become more efficient in the work that they're doing.

Because you're more efficient, you need fewer people in aggregate, but you're not going to see a full-scale replacement of any of those roles

In the long term, it's a little bit harder to say. In the legal industry, there's a similar question of, "Oh, is AI going to replace lawyers?" And is there a suggestion that the representation and creativity and problem solving that you get from a senior lawyer is not going to be needed? I think most of our customers are starting to realize that it's much more about augmentation. The creativity and the problem solving that you have from a person is augmented by these tools, which will help you get to the outcomes you want faster and more efficiently. That might mean that because you're more efficient, you need fewer people in aggregate, but you're not going to see a full-scale replacement of any of those roles.

On purely creative ventures, I think it's going to be even clearer that AI will be used as a tool to help people express their creativity. It's unlikely that you're going to see that replaced. I don't think that's something I would be concerned about in the long term.

The Register: In terms of dealing with customers who are looking at AI, have there been any noteworthy discoveries or surprises as you've rolled these products out to them?

Wong: Many of the things that other businesses have written about or reported on have shown up within our industries. These are not surprising, I don't think, but the change management needed to actually use the tools has been a whole lot more than anyone expected.

An AI tool can do an initial draft of something or can help you with research – how do you actually build that into your workflow?

While it's been easy to run experiments, actually adopting the products into a workflow – into the way business is done or client work is done – has been an interesting challenge. Now that you've proven, for example, that an AI tool can do an initial draft of something or can help you with research, how do you actually build that into your workflow? How do you actually build that into the way you see your clients? That's been the big focus, certainly a focus for 2025, as customers are adopting.

I also think there has been a surprise in the role that a lot of these systems have played in helping to support problem solving. Many of our customers who are power users have realized, "Oh, this is not just about delegation of work." It can also be about having an agent or an assistant that can challenge and test ideas. Once you have some of the backup and the content behind it, such as Westlaw or some of our tax content like Checkpoint, you then have a relatively well-informed sparring partner. That's also been a happy surprise.

The Register: Is there anything else we haven't touched on that you'd want to mention?

Wong: We're not trying to preserve a world of libraries and books and card catalogs here, not at all. In fact, the goals that OpenAI, Thomson Reuters, and all the AI companies have are actually quite consistent. We want to create industry-changing AI systems. From our perspective, we've seen that these AI systems require great models and content, and also experts who are helping to train and refine these systems.

We just want to create an ecosystem in the future in which everybody is incentivized to do the right thing. I think it's very important that the incentives are aligned so that writers are incented to write, that people training and managing those systems are incented to do so, and that model creators are incented to create better models … If we think that superintelligence will have the abilities of a human, then that human should be able to pay for its education. ®
