In this video we take a look at the semi-finished product that resulted from my R&D, so you can see what we will have by the end of the chapter. I also talk a little about the history of behavior trees and why they are a better solution for AI than state machines.
When I started recording the second chapter, I thought about the solution for DLL project generation that we built in the first chapter. I noticed it was redundant, because Visual Studio lets you create your own project/item templates. So, in this video I start turning those templates into a .vsix extension that can be installed into Visual Studio on any local machine.
At the end of the previous video, we saw that icon.ico would copy itself into the projects generated from our template; in this video we fix that. I also fix the icons not showing up in the extension list.
If you have ever tried building your DLLs in a separate Visual Studio project and importing the DLL into Unity, you will have found that when your code throws errors or exceptions, the console message won't show line numbers, and you can't attach your solution to the Unity process to debug it. The reason? Visual Studio (in a debug build) generates .pdb files, and Unity has no understanding of that format. If you look around the forums, you will see that there is a little .exe sitting inside Unity's folders that converts a .pdb into an .mdb, a format Unity does understand. But there is a minor issue: that exe does not work properly with Visual Studio 2015. You will also find that Visual Studio Tools is supposed to do this conversion for you, and again people report that it is unreliable, sometimes it converts the .pdb files and sometimes it doesn't. In this video, we write a little asset post-processor plugin that handles this conversion for us every time.
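The core of the conversion step can be sketched in a few lines. This is a hypothetical Python illustration, not the actual C# asset post-processor from the video: it simply pairs each DLL with its .pdb and builds one converter invocation per pair. The `conversion_commands` helper and its parameters are my own names, and the location of pdb2mdb.exe varies between Unity versions, so it is passed in rather than hard-coded.

```python
import os

def conversion_commands(plugin_dir, pdb2mdb_path):
    """Pair every DLL in plugin_dir with its .pdb (if present) and build
    one pdb2mdb invocation per pair. pdb2mdb_path points at the converter
    that ships inside Unity's install folders (exact location varies)."""
    commands = []
    for name in sorted(os.listdir(plugin_dir)):
        if not name.endswith(".dll"):
            continue
        pdb = os.path.join(plugin_dir, name[:-len(".dll")] + ".pdb")
        if os.path.exists(pdb):
            # pdb2mdb takes the assembly path and writes <name>.dll.mdb beside it
            commands.append([pdb2mdb_path, os.path.join(plugin_dir, name)])
    return commands
```

The real plugin runs this logic from an asset post-processing hook so the .mdb files are regenerated whenever the DLLs are reimported.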
In the previous video, we noticed that our .vsix extension was not properly setting the assembly name and the namespace of the project. In this video, we fix that issue.
It's time to start refactoring our code, not only to make the project more manageable, but also so that each goal can become a separate module that could be installed alongside the core system alone.
We continue our code refactoring. There is still more work to be done before we are ready for our behavior tree system!
A short video describing what delegates are and how to use them. We will use delegates in the next videos, so I thought it would be good to have a little descriptive video first.
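As a rough analogy (Python shown here; C# delegates are typed, but the idea is the same), a delegate is a variable that holds a reference to a method, and a multicast delegate invokes a whole list of subscribers. The function names below are made up for illustration.

```python
def on_node_ticked(name):
    # An ordinary function we want to refer to indirectly.
    return f"ticked {name}"

# A "delegate": a variable holding a reference to a function.
handler = on_node_ticked
handler("Sequence")  # the call goes through the variable

# A multicast-style delegate: several subscribers invoked in order.
subscribers = [on_node_ticked, lambda name: f"logged {name}"]
results = [callback("Selector") for callback in subscribers]
```

This indirection is exactly what lets one system hand a piece of work to another without knowing who implements it.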
In this video we use delegates to move ownership of the context menu and the per-tree toolbars into the tree system itself, separating them from the node graph editor.
In this video, not only do we refactor our system further to fix a few issues and make it more robust, but we also start injecting a few lines of code here and there that will be used by the behavior tree system later.
The last part of the code refactoring! We also add code for sorting nodes based on the x component of their rects; this will be a needed feature when it comes to behavior trees.
Starting our behavior tree (finally)! We take the usual approach of stubbing out our classes first.
In this video, we start putting functionality into our behavior tree graph and its schema classes.
The base node holds all the general functionality shared by every behavior tree node, so you could say it is a special class that deserves close attention. The start node, meanwhile, is the very first node to be ticked, and the one that sends the result back to the tree once the whole tree has been evaluated.
Before we start adding nodes specific to any behavior tree, we have a few issues and bugs that need to be fixed.
In this video we start developing our behavior tree nodes. Our very first nodes are the simplest ones: the success and failure nodes.
In this video we develop one of the essential nodes for any behavior tree system, the sequence node.
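The sequence node's logic is language-agnostic, so here is a minimal Python sketch of the idea (the course implements it in C#; nodes are modeled here as plain callables returning a status string, which is my own simplification): tick children left to right and stop at the first one that does not succeed.

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

def sequence(*children):
    """Tick children left to right; stop at the first non-success."""
    def tick():
        for child in children:
            status = child()
            if status != SUCCESS:
                return status  # FAILURE or RUNNING short-circuits the walk
        return SUCCESS
    return tick
```

In other words, a sequence behaves like a logical AND over its children.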
The selector node, hand in hand with the sequence node, can handle most of the logical transitions between branches in any behavior tree. In this video we develop our own selector node.
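A sketch of the selector idea, mirroring the sequence: tick children left to right and stop at the first one that does not fail. Again this is illustrative Python with nodes as callables, not the course's C# code.

```python
FAILURE = "FAILURE"

def selector(*children):
    """Tick children left to right; stop at the first non-failure."""
    def tick():
        for child in children:
            status = child()
            if status != FAILURE:
                return status  # SUCCESS or RUNNING short-circuits the walk
        return FAILURE
    return tick
```

Where the sequence is a logical AND, the selector behaves like a logical OR: it keeps trying fallbacks until one of them works.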
Wouldn't it be nice to be able to tick a branch of a sequence or selector at random? In this video we develop two more nodes: the random sequence and the random selector.
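One way to picture the random variant (a Python sketch under the same callables-as-nodes simplification, not the course's implementation): shuffle the children before each walk, then apply the normal sequence rule.

```python
import random

SUCCESS = "SUCCESS"

def random_sequence(*children, rng=random):
    """A sequence that visits its children in a freshly shuffled order."""
    def tick():
        order = list(children)
        rng.shuffle(order)  # only the visiting order changes
        for child in order:
            status = child()
            if status != SUCCESS:
                return status
        return SUCCESS
    return tick
```

A random selector would shuffle the same way but apply the selector rule (stop at the first non-failure) instead.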
Action nodes are where all the behavior happens. Above any action node you can have a very complex tree of sequence and selector nodes, but the action nodes are where the real actions and behaviors take place. In this episode we develop a few action nodes that are essential or life-saving: the action node itself, plus the counter, flip/flop, timer, and condition nodes.
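To make the condition node concrete, here is a minimal Python sketch (assuming, as is typical, that a condition node simply wraps a boolean delegate and maps its answer onto the tree's status values; the `health` example is made up for illustration):

```python
def condition(predicate):
    """Map a boolean delegate onto behavior tree statuses."""
    def tick():
        return "SUCCESS" if predicate() else "FAILURE"
    return tick

# Example: a leaf that succeeds while the agent's health is low.
health = {"value": 10}
low_health = condition(lambda: health["value"] < 25)
```

Because the predicate is re-evaluated on every tick, the same leaf can flip between success and failure as the game state changes.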
Despite its name, there is nothing related to multi-threading about parallel nodes. A parallel node is somewhat like a sequence node. The difference is that regardless of the status returned by earlier children, this node ticks every one of its children, and only once all of them have been ticked does it decide whether it succeeded or failed.
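The contrast with the sequence is easy to see in a sketch (Python for illustration; the `require_all` success policy is my own assumption, as real parallel nodes vary in how they combine child results):

```python
def parallel(*children, require_all=True):
    """Tick every child regardless of earlier results, then decide."""
    def tick():
        results = [child() for child in children]  # no short-circuiting
        if "RUNNING" in results:
            return "RUNNING"
        succeeded = (all if require_all else any)(r == "SUCCESS" for r in results)
        return "SUCCESS" if succeeded else "FAILURE"
    return tick
```

Note that unlike a sequence, a failing first child does not stop the later children from being ticked.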
The ability to generate a random number and compare it to a predefined value is almost always part of an AI system's logic. For example, when you want to spawn an item in a reward chest and must choose which rarity to pull from a database, you would generate a random float and compare it to 0.3 for a 30 percent chance. In this video, we develop the random probability node.
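The comparison described above fits in a few lines; this Python sketch takes an injectable random source so the behavior can be pinned down in tests (the parameter names are my own):

```python
import random

def random_probability(chance, rng=random.random):
    """Succeed when a fresh random float in [0, 1) falls under `chance`."""
    def tick():
        return "SUCCESS" if rng() < chance else "FAILURE"
    return tick

# A node that succeeds roughly 30% of the time it is ticked.
thirty_percent = random_probability(0.3)
```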
I can't think of an AI-driven character that won't implement line of sight in one way or another. In other words, you almost always need to check whether something is in your agent's field of view. Of course you could use a condition node, assign a delegate to it, and get the job done, but wouldn't it be easier to have a node dedicated to this job, reducing the number of lines you need to type every time? In this episode we develop our Can See node.
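The geometric heart of a field-of-view check can be sketched like this (a 2D Python illustration; the course's node works in Unity's 3D world and would typically also raycast for obstacles, which this sketch omits): the target must be in range, and the angle between the agent's forward vector and the direction to the target must fit inside the view cone.

```python
import math

def can_see(agent_pos, agent_forward, target_pos, fov_degrees, max_range):
    """2D field-of-view test: the target must be within max_range and
    inside the cone of half-angle fov_degrees/2 around agent_forward."""
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    distance = math.hypot(dx, dy)
    if distance == 0:
        return True  # standing on the target
    if distance > max_range:
        return False
    forward_len = math.hypot(*agent_forward)
    # Cosine of the angle between forward and the direction to the target.
    cos_angle = (dx * agent_forward[0] + dy * agent_forward[1]) / (distance * forward_len)
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))
```

Comparing cosines avoids computing the angle itself, which is the usual trick for this test.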
We had a slight issue with our Can See node; in this episode we fix it.
Another useful node that serves as a coding shortcut checks whether the agent is within a given distance of another position. In this video we develop our own WithinDistance node.
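The check itself is a one-liner; this Python sketch (my own simplification, with positions supplied by delegates so the node always sees fresh values) shows the shape of such a node:

```python
import math

def within_distance(get_agent_pos, get_target_pos, max_distance):
    """Succeed while the target is within max_distance of the agent (2D)."""
    def tick():
        ax, ay = get_agent_pos()
        tx, ty = get_target_pos()
        close = math.hypot(tx - ax, ty - ay) <= max_distance
        return "SUCCESS" if close else "FAILURE"
    return tick
```

In practice you would often compare squared distances to skip the square root, but the idea is the same.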
Another useful node checks whether the agent can hear a sound or not. We will see how we can use a component's features to determine the sound level at any given position.
Adding nodes to handle the agent's flow logic is always a good thing. In this video we add three nodes that help in that regard: the Until Failure, Until Success, and Repeat nodes.
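Two of these decorators can be sketched as follows (Python for illustration, with nodes as callables returning a status string; the exact return conventions are my assumption, as implementations differ on when such decorators report success):

```python
def until_failure(child):
    """Keep the branch alive (RUNNING) until the child finally fails."""
    def tick():
        return "SUCCESS" if child() == "FAILURE" else "RUNNING"
    return tick

def repeat(child, times):
    """Re-tick the child a fixed number of times, then succeed."""
    remaining = [times]  # mutable cell so the closure can count down
    def tick():
        child()
        remaining[0] -= 1
        return "SUCCESS" if remaining[0] <= 0 else "RUNNING"
    return tick
```

Until Success is simply the mirror image of `until_failure`, waiting for a SUCCESS instead.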
The interrupt node is very powerful when used properly. With this node you can check a condition to decide whether another child node should continue ticking or not.
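The core of the idea fits in a short sketch (Python for illustration; real implementations also have to notify the aborted child so it can clean up, which this omits): re-evaluate the guard on every tick and cut the child off the moment it fails.

```python
def interrupt(condition, child):
    """Re-check `condition` on every tick; abort the child when it fails."""
    def tick():
        if not condition():
            return "FAILURE"  # the running branch is cut off immediately
        return child()
    return tick
```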
Let's take a look at what the community has shared on Unity's wiki pages and use one of the most useful systems there to send messages back and forth between any classes without needing to connect them in any shape or form. We also start developing our reflection node, one of the most powerful nodes for handling any reflection job your agent's AI may need.
We continue our reflection node development in this video.
In this video we fix the problems we had with showing the metadata list for the tree itself.
In this video we finally finish our reflection node.