Opinion: How do AI systems “learn” from the intellectual property of others?


By Michael Morris

Many news stories in 2023 referred to Artificial Intelligence (AI) and systems such as ChatGPT learning from information taken from various sources. Authors, artists, and actors all claim that their intellectual property is being used by AI systems to “learn” and, furthermore, to generate new output that is often indistinguishable from human-generated sources.

How is existing intellectual property used for AI learning? What makes the learning and generative processes so computation- and data-intensive? Do the creators of newspaper articles, music, art, and other creative output have a legitimate claim to remuneration from the operators of these AI systems?

First, a bit of background. 

Research in artificial intelligence has been pursued formally since the advent of the digital computer. Over the decades, the theories researchers used to understand and mimic human intelligence have evolved as successive models succeeded or failed at making computers perform in a human-like fashion. In the seventies, promising research theorized that humans create mental scripts, plans, and goals, apply them to the situations they encounter, and extend them to novel ones. Researchers attempted to encode rudimentary scripts and plans so that a computer program could behave anthropomorphically. These approaches quickly reached their limits: it proved impractical to hand-code a nearly infinite number of scripts and goals, and the computer was slow to match a current situation to an existing script and form the appropriate plans.

As computing power became cheaper and readily available, a more ‘brute force’ approach started to show promise: the neural network. At its simplest, a neural network is a multi-dimensional data structure fed with data gleaned from real-world “things”: photographs, music, video, writing, and beyond.

These “things” get digitized, categorized, and fed into the network’s matrix. Grabbing items from the internet makes the work easy: they are already digitized and can be inserted into the neural network in large volumes. Imagine collecting hundreds of digital images of the concept “tree” and feeding them into the neural network with the label “tree.” Eventually, the matrix will hold many examples of “trees,” and mathematical operations will converge on an average “tree” that the network can use in the future to determine whether new digital inputs fit the “tree” category. The neural network can then be said to “understand” the concept of a tree and use it in a generative way (see “Brain-State-In-A-Box model: A simple nonlinear auto-associative neural network” by James A. Anderson). Let’s see what the data for a simple tree might look like when represented in a simple 2-dimensional matrix in our neural network:

Concept: tree

Neural Network Pattern: [1,1,-1,1,1;1,-1,-1,-1,1;1,-1,-1,-1,1;-1,-1,-1,-1,-1;1,1,-1,1,1]

(In this encoding, “off” cells are 1 and “on” cells are -1.)
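Decoded, the pattern really does sketch a tiny tree. A quick Python snippet (illustrative only, not part of the original example) renders each “on” cell (-1) as “#” and each “off” cell (1) as “.”:

```python
# The 5x5 "tree" pattern from the article, row by row.
# Encoding: "off" cells are 1, "on" cells are -1.
pattern = [
    [ 1,  1, -1,  1,  1],
    [ 1, -1, -1, -1,  1],
    [ 1, -1, -1, -1,  1],
    [-1, -1, -1, -1, -1],
    [ 1,  1, -1,  1,  1],
]

# Print "#" for on cells and "." for off cells.
for row in pattern:
    print("".join("#" if cell == -1 else "." for cell in row))
```

Run it and a crude canopy-and-trunk shape appears: a narrow top, a wider middle, a full-width row, and a single-cell trunk.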

Scale the matrix up to 10 million by 10 million, and the variety of trees the network can learn becomes realistic, helping the system extrapolate other possible tree-like things. This simple example extends to countless other categories, e.g. “music by Mozart” or “geographic characteristics of the Mississippi delta,” in endless variety. The neural network’s matrix can have more than two dimensions; indeed, as many as are needed to completely capture the unique characteristics of the digitized data being fed into it.
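The averaging-and-matching process described above can be sketched in a few lines of Python. This is an illustrative toy, not how production AI systems are built; the `noisy` helper, the contrasting “house” category, and the sample counts are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(base, flips=2):
    """Return a copy of `base` with a few cells flipped, simulating
    variation among real-world examples of the same concept."""
    v = base.copy()
    idx = rng.choice(v.size, size=flips, replace=False)
    v[idx] *= -1
    return v

# The 5x5 "tree" pattern from the article, flattened to 25 cells
# ("off" cells are 1, "on" cells are -1).
tree = np.array([ 1,  1, -1,  1,  1,
                  1, -1, -1, -1,  1,
                  1, -1, -1, -1,  1,
                 -1, -1, -1, -1, -1,
                  1,  1, -1,  1,  1], dtype=float)
house = -tree  # a second, made-up category for contrast

# "Training": average hundreds of labeled examples per concept
# to arrive at an average prototype for each label.
prototypes = {
    "tree":  np.mean([noisy(tree)  for _ in range(200)], axis=0),
    "house": np.mean([noisy(house) for _ in range(200)], axis=0),
}

def categorize(x):
    """Pick the label whose prototype is most similar to the input
    (highest dot product)."""
    return max(prototypes, key=lambda label: prototypes[label] @ x)

print(categorize(noisy(tree)))  # a never-seen tree-like input; prints "tree"
```

The dot product against each averaged prototype is the toy stand-in for the article’s “mathematical formulas”: a new input lands in whichever category it most resembles, even though that exact input was never fed in during training.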

Given the large amount of digitized data a neural network requires to build a realistic representation of everyday concepts, it should come as no surprise that the computing and energy requirements of modern AI systems are huge. Back in the 1980s, on PDP-11 computers, it could take hundreds of hours to train a system on one basic concept. Nor is it a surprise that the people responsible for creating the examples from which AI systems build their concepts are claiming theft of intellectual property. Enormous amounts of human-created information are fed into the neural network to help it build concepts and, from those, the ability to create new derivatives. But don’t humans do likewise? Doesn’t a novice artist view a lot of art to form concepts such as “Modern Art” or “Neoclassicism” and then build upon them when creating something new? It’s unclear to me that neural networks are taking without remuneration any more than a human does when they view art, read poetry, or listen to music and then rely upon that foundation to create something new.

Michael Morris is a Laguna Beach resident homeowner. He served as Laguna Beach’s trustee to the Orange County Mosquito and Vector Control District and served on the Orange County Grand Jury. Mr. Morris received his graduate degree in Cognitive Science from Brown University. He did AI work at Rockwell Int’l and joined several early AI companies in the 1980s, including Intellicorp and AICorp.



References:

1) Schank, Roger and Abelson, Robert. “Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures,” Psychology Press, 1977.
2) Anderson, James A. “An Introduction to Neural Networks,” The MIT Press, 1995.

