
Is AI Extending the Mind? A Summary

Cross Labs' Second Innovation Science Workshop focused on projects that explored how physical components interact to perform computations, understanding what intelligence is in terms of computation and cognition, and exploring the ethics of AI applications in the near and slightly-more-than-near future.
By Alyssa Adams
April 25, 2022

In the second week of April, Cross Labs hosted its second annual workshop “Is AI Extending The Mind?” Workshops are my favorite events to attend because I always enjoy open discussions about unanswered questions with the world’s leading experts. Because Cross Labs focuses on the intersection of AI, intelligent processes, cognition, and open-endedness, I knew the topics presented in the workshop would be exciting. In particular, this workshop focused on projects that explored how physical components interact to perform computations, understanding what intelligence is in terms of computation and cognition, and exploring the ethics of AI applications in the near and slightly-more-than-near future.

I hadn’t attended a virtual workshop before, although I’ve been to several virtual conferences throughout the pandemic. Because there aren’t the spontaneous lunch interactions and evening game groups you’d see at an in-person conference, virtual events can be disappointing on the person-to-person side. But at this workshop, I never felt like I was missing out on exciting conversations. The organizers did an excellent job designing the format so attendees could participate, learn, and converse comfortably online.

The schedule was designed to accommodate several different time zones across the globe. There were five sessions: one per day, each an hour long and followed by an hour of discussion. I appreciated this design a lot because it avoided the “Zoom fatigue” that so often accompanies virtual events. With the accommodating format, schedule, and platform, I had plenty of space to read papers, think deeply, refresh, and go over my notes between sessions.

The talks themselves were very exciting. Papers I had seen make their rounds in my news feeds were presented and discussed by the authors. With my Zotero tab open, I saved citations with notes that quoted the presenters on why ideas and results were significant. Each session was recorded and posted on YouTube, which made it easy to go back and double-check anything I may have missed (in fact, all Cross Labs talks are on YouTube, so you can watch them at any time). And because the workshop had its own Discord server, it was easy to chat and connect before, after, and between sessions.

The first day featured the famous Xenobots project from the Levin Lab. I had seen this paper make huge headlines over the last few months and even saw some of my friends outside of science talk about it. It was great to hear about the project firsthand from Josh Bongard and Mike Levin, two of the authors on the paper. My main takeaway is that the physical configuration of biological objects can determine the behavior of the objects in surprising and unexpected ways. In fact, the shape of these frog-embryo-fragments (the xenobots) can even lead to replication. Theoretically, the shape can also lead to assembly behavior, including assembly of logic circuits, as demonstrated by computational models.

This made me wonder about the shapes of other biological – and non-biological – objects as well. For example, a heart only works because it is the correct shape. Any change to its shape (thicker artery walls, a lazy valve, etc.) can lead to the failure of its function. Non-biological objects such as hammers and screwdrivers only work because of their shape as well. In fact, the relationship between physical state and function continued as a central theme throughout the week, appearing in every session.

The second and third days explored the explicit relationship between the state of the environment and the state of the agents that navigate it. In the case of climate change and the survival and success of bees, understanding the relationship between the environment state and the bees as agents is crucial. Alan Dorin demonstrated that machine learning in computational agents can be used as a tool to better understand this process.

For human agents, evidence suggests that the size of our brains has actually been shrinking over the course of our evolution, even though we are “getting smarter” at building new technologies. Tom Froese showed that multiple computational agents exploring the same landscape reach a stable attractor much faster than an agent in isolation. Because technology and society let us solve problems much faster and more easily than if we were navigating the world on our own, the demand for a larger brain (performing much more demanding and difficult computations) isn’t as high as it used to be. In other words, if you consider computation as going from some start state to some end state, the presence of multiple agents allows that computation to happen much faster. And since evolution is lazy, perhaps humans don’t need to maintain brains that large if we now live in a world that lets us solve problems faster via external tools and systems.
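Froese’s actual model is more sophisticated than this, but the speed-up from parallel search can be illustrated with a toy sketch (the setup and numbers here are my own assumptions, not from the talk): several agents independently sample a state space on each step, and the group hits the goal state far sooner, on average, than a lone agent does.

```python
import random

def steps_to_goal(n_states, n_agents, rng):
    """Count search steps until any agent samples the goal state.

    Toy model: the "landscape" is n_states discrete states with a
    single goal (the attractor); each agent samples one state per step.
    """
    goal = 0
    steps = 0
    while True:
        steps += 1
        if any(rng.randrange(n_states) == goal for _ in range(n_agents)):
            return steps

def mean_steps(n_states, n_agents, trials=2000, seed=42):
    """Average steps-to-goal over many independent trials."""
    rng = random.Random(seed)
    return sum(steps_to_goal(n_states, n_agents, rng) for _ in range(trials)) / trials

# A group of ten agents finds the goal roughly ten times faster
# than a single agent searching the same 100-state space.
print(f"solo agent: ~{mean_steps(100, 1):.0f} steps")
print(f"10 agents:  ~{mean_steps(100, 10):.0f} steps")
```

The same “start state to end state” computation completes sooner simply because more of the space is probed per step, which is one way to read the reduced per-brain demand described above.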

But, as Dobromir Dotov pointed out, this doesn’t necessarily mean that we are getting dumber. Even though technology allows us to let go of skills we previously needed (I’ll never need to learn how to wash clothes by hand because my washing machine does it for me), it also makes room for us to explore completely different problem spaces altogether. The conversation afterwards reminded me of an article I read a few years ago on how we have much less free time than was speculated 50-100 years ago. With the invention of all sorts of home appliances, many thought the people of the future would have too much free time on their hands. Instead, we’ve found the opposite to be true. What could this mean for new AI technology moving forward? As new AI architectures such as DALL-E 2 amaze us with their human-like capabilities to perform tasks that previously only humans could do, I can’t help but wonder what the upper limit of human-assisting technology looks like (if there even is one) and what kind of impact it could have on our day-to-day lives.

But the human brain is full of mysteries that we don’t yet fully understand, let alone model. The fourth day featured a broad exploration of all types of experiential phenomena, including lucid dreaming, psychedelics, and out-of-body experiences. Keisuke Suzuki speculated on possible frameworks that could accommodate these types of phenomena in a model. Would it be possible to create a computational model complementary to the classic framework for classifying these types of phenomena? How can AI such as the Deep Dream Generator be used to understand human experiences?

On the final day, Artemy Kolchinsky explored the idea of semantic information, based on a paper that he and David Wolpert wrote recently. The idea is that information for agents is contextual, depending on the agent and the problem that the agent is trying to solve. I loved the example of a raven that remembers a bit of food in the forest and flies out to fetch it. If the raven didn’t remember the location of the food and had no memory of the environment at all, then it would be as if all locations were equally likely to contain food, and it would need to forage randomly. But if the raven remembers the food location, the information encoded within its brain (thus setting the brain to a particular physical state) acts as a bit of semantic information. The effect of the information can be measured in the behavior of the raven as it solves the food-fetching problem, and the value of the information is the difference in energy that it takes to fetch the food with the information versus without it (randomly searching). This was a nice approach to understanding semantic information in biological objects, since many agree that life distinguishes itself from nonlife by the ability to process (use and consume) information to reach different or out-of-equilibrium states.
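As a back-of-the-envelope sketch of this valuation (a toy illustration under my own assumptions, not Kolchinsky and Wolpert’s full formalism): with n equally likely food locations and a fixed energy cost per location visited, a memoryless raven checks (n + 1) / 2 locations on average, while a raven with memory checks one, so the value of the remembered information is the expected cost it saves.

```python
def expected_cost_without_memory(n_locations, cost_per_visit=1.0):
    """Memoryless search: the food is equally likely to be anywhere,
    so visiting locations in random order takes (n + 1) / 2 visits
    on average before the food is found."""
    return cost_per_visit * (n_locations + 1) / 2

def expected_cost_with_memory(cost_per_visit=1.0):
    """With the location remembered, the raven flies straight there:
    a single visit."""
    return cost_per_visit * 1

def value_of_information(n_locations, cost_per_visit=1.0):
    """Value of the remembered bit = expected energy saved."""
    return (expected_cost_without_memory(n_locations, cost_per_visit)
            - expected_cost_with_memory(cost_per_visit))

print(value_of_information(100))  # → 49.5
```

With 100 candidate locations, remembering the right one saves 49.5 expected visits’ worth of energy, which is the semantic value of that memory state in this toy setup.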

Since the workshop, I’ve been reading down a rabbit hole of papers and articles that will end up being a literature review for a future paper. There seems to be a very compelling relationship between the physical state (shape, configuration) of an object, the state of an environment (systems, tools, other agents), and the ability of agents to perform computations. In the field of artificial life, I often wonder whether it is possible to create an agent that not only finds new states (shapes and dynamics), but new types of states altogether. After this workshop, I feel inclined to say that not only does the state of an agent result in behavior that changes the state of the environment, but the state of the environment also changes the state of the agent. This, in turn, may push agents to entirely new shapes, which could push them to explore entirely new behaviors over time. So to answer the question “Is AI extending the mind,” I’d have to agree with those who study embodied cognition and say yes, but the extended mind is also changing us. In any case, I can’t wait for the next Cross Labs workshop!

About The Author

Alyssa is an External Research Member at Cross Labs and a Postdoctoral Fellow at the Morgridge Institute for Research in Madison. Her focus is on biology and computation, protein-protein interactions between Influenza A and humans, emergence and open-ended evolution, complex systems, building a diverse community within the sciences, and engaging the public in scientific discussions. Alyssa recently presented 'Emergence: The Force Behind Open-Endedness' for our monthly speaker series, Cross Roads.