How to build an artificially intelligent plant
The amazing wondering animal Ani Liu just asked me how I would start to build an artificially intelligent plant. That is a beautiful question! Here is what came up:
1. To begin, identify an area where a plant would meaningfully benefit from additional cognition.
Here is an example: plants probably cannot form maps of their environment; they only know gradients. That makes sense, because plants can only move along a gradient: I need more light, in this direction it gets lighter → grow in this direction. What about mapping the room with a camera and sensors, and measuring where it would be good for the plant to be (light, temperature, commotion, air currents)? And then moving there? What about finding out how the room changes during the day and identifying an ideal trajectory through that room to follow as a nomadic plant? What about measuring the need for water and nutrients, and actively seeking out a fountain and a shower with plant food when it gets hungry or thirsty?
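A minimal sketch of what that could look like, assuming the room has been discretized into grid cells and that we have hypothetical sensor readings per cell; the weights and sensor names are made up for illustration, not a definitive design:

```python
from dataclasses import dataclass

@dataclass
class CellReading:
    light: float        # normalized 0..1
    temperature: float  # deviation from the plant's preferred temperature, in °C
    commotion: float    # foot traffic / vibration, 0..1
    draft: float        # air current strength, 0..1

def suitability(cell: CellReading) -> float:
    """Higher is better: plenty of light, small temperature deviation, little disturbance."""
    return (
        1.0 * cell.light
        - 0.5 * abs(cell.temperature)
        - 0.3 * cell.commotion
        - 0.2 * cell.draft
    )

def best_cell(room: dict[tuple[int, int], CellReading]) -> tuple[int, int]:
    """Return the grid coordinates the plant should move toward."""
    return max(room, key=lambda xy: suitability(room[xy]))

# Example: a tiny 2x2 room map built from (hypothetical) camera and sensor data.
room_map = {
    (0, 0): CellReading(light=0.9, temperature=0.5, commotion=0.1, draft=0.0),
    (0, 1): CellReading(light=0.4, temperature=0.0, commotion=0.0, draft=0.1),
    (1, 0): CellReading(light=0.7, temperature=2.0, commotion=0.6, draft=0.3),
    (1, 1): CellReading(light=0.2, temperature=0.0, commotion=0.0, draft=0.0),
}
print(best_cell(room_map))  # -> (0, 0)
```

A nomadic trajectory is then just this scoring repeated over the day, with the plant following the sequence of best cells.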
2. Go one or two levels up in the hierarchy of decision-making systems.
Plants have autonomous regulation, which is perhaps similar to reflex arcs and the global regulation in the brain stem. They might also have something like pleasure and pain, i.e. reinforcement signals that tell them whether their regulation is OK or should be adjusted. It is less likely that they have impulses, i.e. action tendencies in response to needs, because they don’t act much, and they won’t have episodic memory to align their actions with previously observed patterns in the environment. Before we make the plant fully intelligent by giving it a full simulation of the world and itself, self-reflective capabilities and communicative intentions, we could begin by giving it impulses and learned pattern matching. These higher-level regulations need to kick in whenever the autonomous regulation fails. That is, we install processes that closely monitor how well the plant is doing, how well it recovers into the “good space” when it leaves certain parameters, and, when that fails, which actions are best suited to get it back there. For this, we need sensors in the plant itself that measure its well-being and its stress. The former tells us how well it is regulated, the latter how hard it is currently struggling to regulate. (Stress indicates that resources are being spent on the regulation itself.)
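A hedged sketch of such a supervisory layer: well-being measures how close the plant is to its target parameters, stress measures how much the readings are still drifting (a stand-in for how hard it is working to get back). The sensor names, ranges and thresholds here are illustrative assumptions, not measured values:

```python
GOOD_RANGE = {"soil_moisture": (0.3, 0.7), "leaf_temp": (18.0, 26.0)}

def well_being(readings: dict[str, float]) -> float:
    """Fraction of monitored parameters inside their target range (1.0 = all good)."""
    ok = sum(lo <= readings[k] <= hi for k, (lo, hi) in GOOD_RANGE.items())
    return ok / len(GOOD_RANGE)

def stress(history: list[dict[str, float]]) -> float:
    """How much the readings are still changing: persistent drift suggests the
    plant is spending resources on regulation rather than settling."""
    if len(history) < 2:
        return 0.0
    prev, cur = history[-2], history[-1]
    return sum(abs(cur[k] - prev[k]) for k in GOOD_RANGE) / len(GOOD_RANGE)

def supervise(history: list[dict[str, float]]) -> str:
    """Escalate from doing nothing to a cognitive intervention as regulation fails."""
    wb, st = well_being(history[-1]), stress(history)
    if wb == 1.0:
        return "autonomous regulation is fine, do nothing"
    if st > 0.05:
        return "plant is struggling but still regulating, keep watching"
    return "regulation has stalled, trigger an impulse (e.g. seek water or light)"
```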
3. Make the plant goal-directed.
A goal is the explicit representation of a state that will make the plant better off. An aversive goal is the explicit representation of a state that will make the plant worse off and should be avoided. A state is a collection of features, which describe environmental and internal conditions that correlate with the needs of the plant. Decision making identifies goals and picks the ones it wants to make real, based on the ways, probabilities and costs of getting there. Plants cannot form explicit representations, but cognitive systems can. A plant may decide to take in more water if it needs water and water is currently sloshing over its leaves or through its roots, or it might decide to switch modes and stop doing that. But that is not a goal: it is just an immediate response to a pattern of stimuli. A goal would allow it to make a plan, for instance to visit a fountain at a time when it is likely to need water, or to avoid a state it should stay away from. Goals can be learned with operant conditioning once we have a cognitive representation of the needs (through urges) and of the states that satisfy or frustrate them.
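To make the operant conditioning concrete, here is a minimal sketch under the assumption that we already have urges (numeric need signals) and a handful of discrete states and actions the cognitive system can represent. It uses plain tabular reinforcement learning: actions that reduce an urge are reinforced, and the highest-valued action in the current state becomes the thing the system wants to bring about. All names are hypothetical:

```python
import random
from collections import defaultdict

ACTIONS = ["stay", "go_to_fountain", "go_to_window"]

def reward(urges_before: dict[str, float], urges_after: dict[str, float]) -> float:
    """Reinforcement is the total reduction in urge strength."""
    return sum(urges_before[k] - urges_after[k] for k in urges_before)

class GoalLearner:
    def __init__(self, alpha: float = 0.1, epsilon: float = 0.1):
        self.q = defaultdict(float)   # (state, action) -> learned value
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration rate

    def choose(self, state: str) -> str:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)                       # explore
        return max(ACTIONS, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state: str, action: str, r: float) -> None:
        self.q[(state, action)] += self.alpha * (r - self.q[(state, action)])

    def current_goal(self, state: str) -> str:
        """The outcome the system currently most wants to make real."""
        return max(ACTIONS, key=lambda a: self.q[(state, a)])
```

A state like "afternoon, water urge high" would, after enough experience, map to "go_to_fountain" as the explicit goal, rather than a mere stimulus response.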
4. Align the incentives of the cognitive system with the incentives of the plant.
The cognitive system needs to do well when the plant does well, and vice versa. Doing well here means that the reward signals in the cognitive system need to approximate the rewards/utility of the plant organism.
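A small sketch of what that alignment could mean in practice, assuming we can read the plant's need satisfaction (water, light, nutrients) as numbers between 0 and 1: the cognitive system's reward is simply the weighted change in those numbers, so the learner above can only score well when the plant itself does well. The weights are illustrative:

```python
NEED_WEIGHTS = {"water": 0.4, "light": 0.4, "nutrients": 0.2}

def cognitive_reward(needs_before: dict[str, float], needs_after: dict[str, float]) -> float:
    """Positive exactly when the plant's needs are better satisfied after the action."""
    return sum(w * (needs_after[k] - needs_before[k]) for k, w in NEED_WEIGHTS.items())
```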
5. Supplement the plant with non-plant goals and non-plant cognition, but do it to serve the plant’s needs.
Once a plant becomes nomadic, it might need to coordinate its actions with other inhabitants of the same space. Perhaps humans would prefer not to sit in the shade of the plant, or to actually sit in the shade of the plant. Perhaps it should not insert itself into the line of sight between people having a conversation. Perhaps it needs to coordinate its feeding times with other plants. Perhaps it needs to ask for help if the food shower is empty or cannot be found, or if it believes that it got sick. Perhaps it can secure rewards by offering limited service to human needs (such as aesthetic needs, amusement etc.), like pets do. The plant may reward humans whenever it feels well, by providing beautiful illumination in corners of the office at night, or by expressing its well-being through deliberately aligning itself with the aesthetics of the room and by (sparsely!) playing soothing sounds; conversely, it can express stress and displeasure when it becomes cranky from leaving its ideal parameters, through erratic movements, flashing lights or grumbling noises. This might mean that there is going to be an autonomous system that parses language, expresses the state of the plant to others, performs basic social cognition etc., but the incentives of that system are still going to be tied to the regulation and well-being measurements originating in the plant itself.
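The expressive side of that social layer could be as simple as a mapping from the same well-being and stress signals used everywhere else onto output channels; the channels and thresholds below are invented for illustration:

```python
def express(well_being: float, stress: float) -> dict[str, str]:
    """Map internal state onto social outputs; driven by the plant's own signals."""
    if well_being > 0.8 and stress < 0.2:
        return {"light": "soft warm glow", "sound": "occasional soothing tone",
                "motion": "align with room aesthetics"}
    if stress > 0.6:
        return {"light": "irregular flashing", "sound": "low grumbling",
                "motion": "erratic shuffling"}
    return {"light": "off", "sound": "silent", "motion": "hold position"}
```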