I wrote some code to simulate creatures in an environment. It didn’t work because it was not well thought-out. I wanted to jot down some thoughts about it to help me do a better design job on version 2.

My creature simulator is a Mac app written in Swift. It uses a neural network to control creature behavior. I’m starting out with a network of three inputs, 12 intermediate “neurons” in a single intermediate layer, and four outputs. The entire network uses floating-point values for all operations.
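As a rough sketch, that network might look something like this in Swift. The sigmoid activation and the random weight range are my assumptions, since I haven’t settled on those details yet:

```swift
import Foundation

// A minimal fully connected feed-forward network with one hidden layer.
// Layer sizes default to the 3-12-4 design described above.
struct Network {
    var hiddenWeights: [[Double]]  // hidden x inputs
    var hiddenBiases: [Double]
    var outputWeights: [[Double]]  // outputs x hidden
    var outputBiases: [Double]

    static func random(inputs: Int = 3, hidden: Int = 12, outputs: Int = 4) -> Network {
        func row(_ n: Int) -> [Double] { (0..<n).map { _ in Double.random(in: -1...1) } }
        return Network(
            hiddenWeights: (0..<hidden).map { _ in row(inputs) },
            hiddenBiases: row(hidden),
            outputWeights: (0..<outputs).map { _ in row(hidden) },
            outputBiases: row(outputs)
        )
    }

    // Sigmoid squashes each neuron's sum into 0...1 (an assumed choice).
    private func sigmoid(_ x: Double) -> Double { 1.0 / (1.0 + exp(-x)) }

    // One layer: weighted sum of inputs plus bias, then activation.
    private func layer(_ inputs: [Double], _ weights: [[Double]], _ biases: [Double]) -> [Double] {
        zip(weights, biases).map { w, b in
            sigmoid(zip(w, inputs).reduce(b) { $0 + $1.0 * $1.1 })
        }
    }

    func run(_ inputs: [Double]) -> [Double] {
        layer(layer(inputs, hiddenWeights, hiddenBiases), outputWeights, outputBiases)
    }
}
```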

My plan was to implement an evolution system to optimize the networks in the creatures. My research yielded two possible methods for optimizing the neural networks: a training mechanism where good outputs are rewarded, and a survival-of-the-fittest mechanism. These two approaches are drastically different. The former relies on a training system where someone or something knows what a good output is for a set of inputs, and the network can be altered based on that information. The latter has no mechanism for changing an existing network and instead relies on creating offspring of a network that carry random mutations. Mutations that improve survivability will persist long enough to be passed on to more offspring.

Optical Character Recognition, or OCR, is often implemented using a neural network that is optimized through training. This can be easy if the trainer (a person using some software) has access to a lot of pictures of characters as well as data indicating what characters are in those pictures. The network is trained by setting the inputs to the data in a picture while observing the output. If the network outputs the known correct result, the network is strengthened in a way that makes the correct output more likely with similar input data. The exact way the network is changed is not relevant here since it is a bit mathy and since I’m not going to do any of this. The key point is that the network can be given a specific set of inputs and the outputs can be observed immediately to determine correctness. The network is constantly optimized with each result in a way that makes it better at properly detecting all characters from pictures of them.

Creature simulation often, or maybe always, uses evolution to train the neural networks. Note that I say “networks” because each creature will have its own network, unlike in the OCR example above. For creatures existing in an environment over time, it would be rather complicated to know whether a set of input data produced a good output until long after the input data has changed. A creature might turn left when it sees a predator on its left and then get eaten when the predator is directly in front of it. It was the left-turn output from the predator-on-the-left input that caused the bad result, but by the time the bad result is detected, the inputs have already changed quite a bit. There would need to be some sort of memory system to handle training the network in this way. It would also be hard to train the network if the creature is considered dead and can’t benefit from any improvement. It makes me wonder how human brains work, since we might see that our throwing a ball hit the target long after our brain calculated and performed the throw. Since we can’t easily have our creatures learn from their mistakes without a tremendously complicated system that is way beyond my abilities to design, we are left with evolution of their networks as the optimization method. Creatures will have a way to survive, or not, and a network that enhances survival will last long enough to produce offspring with that same network. Offspring will have random mutations, and those will improve, degrade, or have no effect on the offspring’s ability to survive. This continues generation after generation, and eventually all of the creatures could have networks with pathways optimized for their survival.
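A minimal sketch of what that mutation step could look like, treating a network as a flat array of weights. The rate and magnitude are placeholder values I made up, not tuned numbers:

```swift
// Each weight has a `rate` chance of being nudged by a random amount
// up to `magnitude`. Most weights pass through unchanged, so offspring
// stay close to their parents.
func mutate(_ genome: [Double], rate: Double = 0.05, magnitude: Double = 0.5) -> [Double] {
    genome.map { weight in
        Double.random(in: 0..<1) < rate
            ? weight + Double.random(in: -magnitude...magnitude)
            : weight
    }
}
```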

OCR is easy. It can get complicated if very accurate results with very crappy input is desired, but fairly good results from fair input should be easy to achieve with just a network and some training data. Creature evolution, on the other hand, is tricky. Yes, I have a working neural network. And that network is tiny but probably enough to be useful. But I need meaningful inputs and useful outputs. And this is where I’m stuck at the moment. This is where I need to think things through.

For inputs, I want to be able to evolve them too. I’m stuck on this because there is no really useful way to have a sight sensor, one that can always detect other creatures and know their direction, evolve into a smell sensor where only the odor of creatures can be detected without direction or distance, but maybe strength. Perhaps I should come up with every type of sense that could be used in my environment and then have a sensor for each. Then, with a limited amount of resources to spend on using those senses, some creatures might evolve a terrific sense of smell and no sense of sight. Or would all creatures end up with every sense available but at a mediocre level of ability? I feel like I’m already in over my head, so I think I’ll stick with just “sight” and a single sensor. I’ll give the creatures 360-degree vision for now and see how their behavior evolves with this.
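One possible way to turn 360-degree vision into a single network input is to feed in the angle to the nearest creature, relative to the viewer’s heading, normalized into 0...1. The names and coordinate conventions here are my own; I haven’t committed to a representation yet, and this ignores distance entirely:

```swift
import Foundation

// Returns 0.5 when the target is directly ahead, values above 0.5 for
// targets to the left, below 0.5 for targets to the right (assuming
// standard math coordinates with counterclockwise-positive angles).
func sightInput(heading: Double, viewer: (x: Double, y: Double),
                target: (x: Double, y: Double)) -> Double {
    let absolute = atan2(target.y - viewer.y, target.x - viewer.x)
    var relative = absolute - heading
    // Wrap into -π...π so left and right are symmetric around straight ahead.
    while relative > .pi { relative -= 2 * .pi }
    while relative < -.pi { relative += 2 * .pi }
    // Map -π...π onto 0...1.
    return (relative + .pi) / (2 * .pi)
}
```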

For output, I think that I need to just have the ability to turn. This is where I’m not sure how to utilize a neural network. I could take a single output and, using numbers from 0 to 1, treat 0.5 as meaning “go straight” and use lower values for “left” and higher values for “right.” But this seems a little weird and maybe not how a neural network would work. It might be better to have an output for left and an output for right and then combine them. That way, seeing another creature to mate with on the left would use strong connections to make a left turn. Using weak connections to make a left turn could work but again, it seems a little bit contrary to the idea of strong connections causing strong reactions. I might also want to make the eye work the same way with an input for something being on the left and an input for something being on the right. Interestingly, this could be done with a lot of individual inputs with the one for the direction of a potential mate being set to a high value and the others all being low. My aim is to use the network in a sensible way without really knowing what that is yet. Maybe I could find someone who already did this and copy their work – but that would not be nearly as challenging.
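The two-output idea could be combined into a single turn like this. The function name and the `maxTurn` scaling are hypothetical, but it captures the appeal of the approach: strong activations produce strong reactions, and equal activations cancel out into “go straight”:

```swift
// Positive result means turn right, negative means turn left.
// maxTurn is a placeholder for whatever turn-rate limit the
// simulation ends up using.
func turnAmount(left: Double, right: Double, maxTurn: Double = 0.1) -> Double {
    (right - left) * maxTurn
}
```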

So I’ll use two inputs and two outputs. They will be for left and right sight and left and right movement. I’ll give the creatures a lifespan, and I’ll have them produce offspring whenever they meet another creature. I will perhaps create three offspring: a copy of one parent, a copy of the other, and one that is an average of the two. I can apply a mutation to the networks of all of them, or none, using some random calculations. This is another area where I’m not sure how to proceed, and I might end up needing a lot more offspring.
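A sketch of that three-offspring scheme, with networks treated as flat weight arrays and the mutation step passed in as a function. All of the names here are my own invention:

```swift
// Produces three children: a copy of each parent plus a
// weight-by-weight average, with the supplied mutation applied to each.
func offspring(of a: [Double], and b: [Double],
               mutate: ([Double]) -> [Double]) -> [[Double]] {
    let average = zip(a, b).map { ($0 + $1) / 2 }
    return [a, b, average].map(mutate)
}
```

Passing the mutation in as a closure keeps the door open for the “all or none” question above: the caller can hand in an identity function, a per-weight random nudge, or anything in between.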

The next step will entail food, predators, and maybe other things in the environment. Then I’ll start working on smell, hearing, touch, and other sensors. Speed and acceleration are interesting things to explore too. And with food involved, the creatures should have hunger inputs, speed outputs, and things of that sort, as well as an ability to use food for strength. All of that combined should eventually allow me to create an ecosystem where creatures might evolve toward a balance where they don’t overpopulate, don’t overeat, don’t over-procreate, etc. That over-procreation issue seems interesting, since in a robust ecosystem it should cause predators to increase, thus killing off the creatures that are spawning too many overpopulating offspring.

But for now, the task at hand is to make creatures that can see others and react in whatever way their network dictates. They just need to do something, and then if they bump into each other, they create offspring with mutations, averages of their networks, etc.