Crash Course into AI
Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Wed Jan 16, 2013 1:06 pm | |
| Ok, if statements for AI.
Let's assume you have an AI that runs only on if statements. For example, if there is food nearby, eat the food. For one element, we have one if statement.
If we add a second element into the equation, for example a predator, things get a bit more complicated. If there is food nearby, eat the food. If there is a predator nearby, run away from the predator. Now, what happens when there is food and a predator, and both statements collide? We need a third if statement for that case. If there is food and a predator nearby, run away from the predator. Two elements, three if statements. You can already see where the problem is.
Let's go one step further. Now we have a third element, oxygen, which the cell must gather. On top of the previous if statements (3 of them), we now need another that says: if there is oxygen nearby, go for the oxygen. Again, we have ifs that can collide, so we need extra ifs: if there is food and oxygen nearby, get the oxygen. If there is oxygen and a predator nearby, run away. If there is food, oxygen, and a predator nearby, fight the predator. One extra element, 4 extra ifs. Now we are at 7.
The more you add, the more ifs you will need. If for one level of complexity you need n if statements, adding an extra element means you will now need 2n + 1 statements, so k elements need 2^k - 1 of them.
Of course you can simplify some of the ifs, but the problem is still there: the number of if statements grows exponentially.
However, with weighted functions, if you want to add an extra element, you just have to add the function for it. Also, as it is a learning agent, it will behave like real animals, learning what is good and what is bad as it goes.
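As a very rough sketch in C++ (not an agreed design; the State fields, feature names and weights here are made up purely for illustration), the weighted approach could look something like this:
- Code:
-
#include <cstddef>
#include <vector>

// Hypothetical view of the world after taking some action (illustration only).
struct State {
    double distFood;      // distance to the closest food
    double distPredator;  // distance to the closest predator
    double distOxygen;    // distance to the closest oxygen patch
};

// A feature in [0, 1]: bigger means "closer".
double closeness(double distance) { return 1.0 / (1.0 + distance); }

// Q(s) = W1*F1(s) + W2*F2(s) + W3*F3(s): weights are learned, features are hand-coded.
double score(const State& s, const std::vector<double>& w) {
    return w[0] * closeness(s.distFood)
         + w[1] * closeness(s.distPredator)   // this weight should end up negative
         + w[2] * closeness(s.distOxygen);
}

// Pick the candidate action whose predicted resulting state scores highest.
// Adding a fourth element means adding one feature and one weight, not 2n+1 new ifs.
std::size_t bestAction(const std::vector<State>& outcomes, const std::vector<double>& w) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < outcomes.size(); ++i)
        if (score(outcomes[i], w) > score(outcomes[best], w)) best = i;
    return best;
}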
One more thing. Your if agent doesn't care at all about food outside its "I want that food" radius. Mine does. If there is nothing around that is significant to the cell, it will go towards the nearest food.
I believe this is long enough for now. I will get back to this later, especially if you are still skeptical that the if agent is impractical. | |
| | | ido66667 Regular
Posts : 366 Reputation : 5 Join date : 2011-05-14 Age : 110 Location : Space - Time
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 8:58 am | |
| - Daniferrito wrote:
- Ok, if statements for AI. [...]
Do you have any other way to implement it? f(x, y) = ... is not part of the C++0x standard. We can't just go around telling the PC "I command you to use your thingy and create my weighted function, and then magically use it on some virtual cells."... Also, we can thread add more ifs. The fact that something is long doesn't mean one can't make it. Also, I don't plan to add n elements. Does your agent have an infinite ability to spot food? BTW, do you have a plan for how to create that learning agent? P.S. C++ is not based on AOP, so don't think that agents can solve everything... They are just useful tools. Also, our project will heavily use OOP. | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 11:21 am | |
| Every single programming language that I know of can do something like f(x, y), whether you mean two input arguments or two outputs (although two outputs are a bit harder in some languages). All the way from the lowest level (assembly language, like MIPS) to the highest (like Python).
Of course we have to give the computer the functions. What we don't need to give it is the weights. It will learn those by itself, depending on the rewards it gets.
Basically, what it does is: for every possible action (we give it the actions it is allowed to take in a given state; that is the point I'm asking for help with), calculate the value of each weighted function, multiply each one by its weight, and choose the action with the highest total value.
In order to learn the weights, an extra step is needed. Each time an action finishes, the agent receives a reward (probably in the shape of food, or anything else). Eating gives lots of food, and moving takes away some food (a negative reward). Then it adjusts every weight: Wi = Wi + alpha*(correction)*Fi(s,a), with correction = (r + max(Q(s',a'))) - Q(s,a). Wi is weight number i, and Fi is weighted function number i (they are linked).
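A minimal sketch of that update rule in C++ (the State/Action types, the feature functions and alpha are placeholders, and the future term is not discounted, exactly as in the formula above):
- Code:
-
#include <algorithm>
#include <cstddef>
#include <vector>

// Placeholder types: a State/Action pair is whatever the game hands the agent.
struct State {};
struct Action {};

// Hand-coded feature functions Fi(s, a); their weights Wi are what gets learned.
using Feature = double (*)(const State&, const Action&);

struct LinearQAgent {
    std::vector<Feature> features;  // F1..Fn
    std::vector<double> weights;    // W1..Wn, same length as features, start at 0
    double alpha = 0.1;             // learning rate

    // Q(s,a) = W1*F1(s,a) + ... + Wn*Fn(s,a)
    double q(const State& s, const Action& a) const {
        double total = 0.0;
        for (std::size_t i = 0; i < features.size(); ++i)
            total += weights[i] * features[i](s, a);
        return total;
    }

    // Called when an action finishes: reward r, new state s2, legal actions there.
    // correction = (r + max Q(s',a')) - Q(s,a);  Wi += alpha * correction * Fi(s,a)
    void update(const State& s, const Action& a, double r,
                const State& s2, const std::vector<Action>& nextActions) {
        double future = 0.0;                       // no future value in a terminal state
        if (!nextActions.empty()) {
            future = q(s2, nextActions[0]);
            for (const Action& a2 : nextActions)
                future = std::max(future, q(s2, a2));
        }
        const double correction = (r + future) - q(s, a);
        for (std::size_t i = 0; i < features.size(); ++i)
            weights[i] += alpha * correction * features[i](s, a);
    }
};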
- Quote :
- Also, we can thread add more ifs.
I don't get it. And yes, I have a plan for how to create that learning agent. It's here. And I have already created one (a Pac-Man agent). And aspect-oriented programming is a particular take on object-oriented programming, but it is still object-oriented programming. I don't see how it affects this, though. | |
| | | ido66667 Regular
Posts : 366 Reputation : 5 Join date : 2011-05-14 Age : 110 Location : Space - Time
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 1:31 pm | |
| - Daniferrito wrote:
- Every single programming language that I know of can do something like f(x, y) [...]
First, we can't say vague or very abstract things in functions, so in C++ it will probably come down to if, else, goto, switch, case, break, while, do-while, for, continue and exit. Second, by "No f(x, y)" I meant no purely mathematical functions; it must contain some control statements, and BTW, if you convert the math notation to C++, x and y in the declaration will just be arguments... Third, I meant we can nest if and else or any other control statements.
The above can be fixed and we could convert your model to real C++, except: your learning model can work very well in Pac-Man, so feel free, but in Thrive it will encounter some fundamental issues. I will explain it the dry way, and then with an analogy:
1. Computers and other machines can't think very abstractly, vaguely or philosophically, and don't have any biological impulse to breed, survive, eat, etc., so they don't have any sense of what is "bad" and what is "good" for a biological creature (survive, breed, eat...). Therefore, to have any good learning AI, the programmer must first define what is good for the cell and what is bad for it, and in Thrive a lot of things can happen, so we can't just define what is "good" and what is "bad" for every possible situation, or else when some cell encounters one of those situations the AI will stop working and will not respond to it.
2. Now we have another problem: the computer doesn't know how to connect and understand any connection between events. One can't just define that dying is "bad" and expect the computer to avoid anything that has a high chance of leading to death... The programmer will need to define what is "bad" and what is "good" specifically in every situation and in detail, and then we are back to 1...
3. Cells and non-sentient creatures that don't have a brain don't really "learn"; they "learn" through neutral selection, because the individuals that act "dumb" die and the ones who act "smart" thrive. Because of that, to have a good learning AI we need to apply auto-evolution to the AI itself, which will make things much more complex... Creatures that have a brain but are not very intelligent really do learn some things, but other things they "learn" like cells.
4. We treat cell species as a whole (Seregon's population dynamics and compound system), except for the individuals that are near the player, so a really good AI won't have much effect on the player... unless we apply the AI to whole species (making things very complex).
That fundamental problem was once addressed (in a much less polite manner than how I address it now) by banshee (a very talented programmer) in his last thread (similar to roadkillguy's last thread (also a talented programmer)), but in the context of auto-evo, which was then ill-formulated.
Analogy time: Programmers are not like "horse whisperers", but are like normal horse riders; they can't whisper vague and very abstract commands to the horse and have it obey... They have to give the horse specific commands... But computers are much more "dumb" than horses.
P.S. AOP is in fact a paradigm in itself, but in C++ it is used with OOP, much like functional programming: we can use functions in classes, but that doesn't mean functions originated in OOP... I said it just so you won't get too "excited" with agents. | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 4:31 pm | |
| 1 - Yes, but we can tell them to maximize a value. From a given state, each action will have a value, and the agent will perform the action with the greatest value. Those values are calculated like so:
Q(s,a) = W1*F1(s,a) + W2*F2(s,a) + ... + Wn*Fn(s,a)
The Fn(s,a) are the things that we're discussing right now. They are functions that we code in beforehand.
- Spoiler:
Let's say we only have one function, defined like so: F1(s,a) = 1/(1 + distanceToClosestFood), and its weight (W1) is something positive, like 1. The agent will calculate all Q(s,a), which are predictions of what will happen if it tries to perform action a. For each available action, it can calculate where it will end up after the action is done. From there, it can calculate distanceToClosestFood, plug that value into F1(s,a), and calculate the result. Then it compares all Q(s,a) and returns the action whose Q(s,a) is highest.
Long story short: the agent will conclude that the best action is the one that brings it closer to the nearest food.
2 - The computer learns the connections. That's what the second part of the algorithm is there for; it's the part that learns. Let me explain:
When an agent gets a reward, all weights are adjusted, and the size of the adjustment is determined by how big each function's value was at the moment. For example, a cell passes over a bit of food. Right when it is on top, it receives a positive reward. Then all weights are adjusted on this. The weighted functions will look like this:
F1 (food) = 1/(1 + distanceToClosestFood) = 1/(1 + 0) = 1 -> W1 += alpha*reward*F1 = alpha*reward*1
F2 (enemy) = 1/(1 + distanceToClosestEnemy) = 1/(1 + 99) = 0.01 -> W2 += alpha*reward*F2 = alpha*reward*0.01
That means that both W1 and W2 will go up, but W1 will go up 100 times more. In the case of an enemy eating the cell, it will look similar, but now the reward will be negative, and as F2 will be higher than F1, W2 will go down much faster than W1. This way, the agent is effectively learning that being close to food is good and being close to enemies is bad. It is effectively learning the connection between being close to food and eating.
3 - We have already discussed this a bit (around post 11 of this thread, between scio and me) and concluded that all agents from the same species will share weights (W). This effectively means some sort of racial memory, or behaviour. It is a bit inaccurate, but it is the closest we can get.
- Spoiler:
The ideal situation would be to have separate weights for every individual, received at birth with a little randomization, so that the ones that randomly get more suitable weights survive to pass those traits on to their children. However, that would require thousands (if not more) of individuals, each one unique and stored in memory separately.
4 - Same as 3. You mean Bashinerox in his thread Why Auto-Evo is Dead? I kind of agree with that thread, but I don't think this is the same thing. The reasons are in this thread. This system is well defined. This system has already been done successfully. | |
| | | ido66667 Regular
Posts : 366 Reputation : 5 Join date : 2011-05-14 Age : 110 Location : Space - Time
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 7:13 pm | |
| - Daniferrito wrote:
- 1 - Yes, but we can tell them to maximize a value. [...]
Your system is good, but we need to define every state and action so the computer will be able to make the connection between the reward and the action. Also, does it have any effect on the population dynamics system? | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 7:34 pm | |
| We don't have to define each state and action pair. That's what the weighted functions (features) are for. We extract from the state the things interesting to us and plug them into the functions (if we have a state, it's really easy to calculate the distance from the cell we care about to the nearest food). Then each state is simplified to these features. We only have to tell it how to extract those features.
And no, as far as I know, this doesn't affect population dynamics, although I think there is some way they could work together. For example, by looking at the weights that link different species, we can determine how different species interact with each other. For example: species A, which could feed on species B, doesn't even try to hunt species B, because species B learnt to run away and species A can't run as fast, so trying to hunt them is a waste of energy.
I hope that wasn't too obscure (can't find a better word in English). | |
| | | ido66667 Regular
Posts : 366 Reputation : 5 Join date : 2011-05-14 Age : 110 Location : Space - Time
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 7:44 pm | |
| - Daniferrito wrote:
- We don't have to define each state and action pair. [...]
I get it now, thanks for the clarification. | |
| | | NickTheNick Overall Team Co-Lead
Posts : 2312 Reputation : 175 Join date : 2012-07-22 Age : 28 Location : Canada
| Subject: Re: Crash Course into AI Thu Jan 17, 2013 10:39 pm | |
| Now this level of the conversation may be beyond my understanding, but I do have some comments to make. - ido66667 wrote:
- Your learning model can work very well in Pac-Man, so feel free, but in Thrive it will encounter some fundamental issues.
Ido, please, refrain from being condescending. - ido66667 wrote:
- 3. Cells and non-sentient creatures that don't have a brain don't really "learn"; they "learn" through neutral selection,
Bear in mind that creatures can learn basic concepts. For example, if a cell goes near another cell and gets eaten, it will learn not to go near that cell. Also, I couldn't ignore this, it is called Natural Selection. - ido66667 wrote:
- That fundamental problem was once addressed (in a much less polite manner than how I address it now) by banshee (a very talented programmer)
I wouldn't call your explanation that much more polite. You were quite rude in certain areas, not just in this post but in earlier ones, and you neglected to read many of Dani's earlier posts. By the way, his name is Bashinerox, not banshee. - ido66667 wrote:
- Analogy time:
Programmers are not like "horse whisperers", but are like normal horse riders; they can't whisper vague and very abstract commands to the horse and have it obey... They have to give the horse specific commands... But computers are much more "dumb" than horses. Ido, you are a relative beginner to programming, not to say I am any better myself. Posting analogies and jokes pertaining to code doesn't increase our perception of your capabilities. It is also slightly offensive, I would think, to explain to Dani how coding works in such a simple way when he is clearly a more experienced programmer than yourself. Remember what I've said: we appreciate your contributions only so long as you refrain from becoming insulting or from asserting your knowledge of whatever field onto others. | |
| | | Ionstorm Newcomer
Posts : 11 Reputation : 1 Join date : 2013-01-17 Location : Cambridge
| Subject: Re: Crash Course into AI Fri Jan 18, 2013 2:17 pm | |
| From what I see here, I think one way of implementing this system in code would be to have one AI manager that controls the AI and a species class that stores the weights of goals. Something like AiMan -> Species, World -> Update -> BananaSlug -> Update -> find the closest object with the highest weight, with distance giving an exponential falloff on its influence, e.g. close objects have much more influence even if they are less important weight-wise.
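One possible shape for that in C++, just as a sketch (the class and member names here are only a guess at what is being described, not an agreed design):
- Code:
-
#include <cmath>
#include <limits>
#include <vector>

// Per-species behaviour data: one weight per kind of goal/object.
struct Species {
    std::vector<double> goalWeights;  // e.g. [food, predator, oxygen, ...]
};

struct WorldObject {
    int goalType;   // index into Species::goalWeights (assumed valid here)
    double x, y;
};

struct Creature {
    double x, y;
    const Species* species;

    // Weight damped by distance (the "exponential falloff"): close objects dominate
    // even if their weight is smaller. Returns nullptr if nothing is around.
    const WorldObject* pickTarget(const std::vector<WorldObject>& objects) const {
        const WorldObject* best = nullptr;
        double bestScore = -std::numeric_limits<double>::infinity();
        for (const WorldObject& o : objects) {
            double d = std::hypot(o.x - x, o.y - y);
            double s = species->goalWeights[o.goalType] * std::exp(-d);
            if (s > bestScore) { bestScore = s; best = &o; }
        }
        return best;
    }
};

// The manager just walks every creature each update: AiMan -> Creature -> pickTarget.
struct AiManager {
    std::vector<Creature> creatures;
    void update(const std::vector<WorldObject>& objects) {
        for (Creature& c : creatures) c.pickTarget(objects);  // then steer towards it
    }
};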
There must be more ways than this but this is all I can think of. | |
| | | ido66667 Regular
Posts : 366 Reputation : 5 Join date : 2011-05-14 Age : 110 Location : Space - Time
| Subject: Re: Crash Course into AI Fri Jan 18, 2013 3:19 pm | |
| - NickTheNick wrote:
- Now this level of the conversation may be beyond my understanding, but I do have some comments to make. [...]
Look, while he is better than me, that doesn't take away my right to ask, criticise or question his ideas as long as I don't use personal attacks (arguing against a proposition by attacking the person)... I questioned his proposition, not him. I used analogies and jokes because this is my way to explain things; if one of my analogies is wrong, feel free to point it out... If you don't find my jokes funny, that just says you have a different taste in humor, and that's okay. I thought that there was a fundamental problem, he explained, and I said that I was wrong. I think that right now you questioned me, myself... not what my arguments were. I also feel (correct me if I am wrong) that you suggest I had bad intent in mind; in fact, when I wrote this post I tried to avoid anything insulting, and if I did insult Dani, I am sorry. Not to mention that this is completely off topic, so to avoid "threadjacking" this thread, if you want to continue this unrelated argument, PM me. | |
| | | NickTheNick Overall Team Co-Lead
Posts : 2312 Reputation : 175 Join date : 2012-07-22 Age : 28 Location : Canada
| Subject: Re: Crash Course into AI Fri Jan 18, 2013 6:41 pm | |
| - ido66667 wrote:
- as long as I don't use personal attacks (arguing against a proposition by attacking the person)...
Good, but you were bordering on it. - ido66667 wrote:
- If you don't find my jokes funny, that just says you have a different taste in humor, and that's okay.
It's not to do with the taste of the joke, it was the intent behind it.
- in fact, when I wrote this post I tried to avoid anything insulting, and if I did insult Dani, I am sorry.
Good to see your intentions are pure. Apologies are always appreciated.
- to avoid "threadjacking" this thread, if you want to continue this unrelated argument, PM me.
Do not try to dismiss this; it pertains to you and is important to address.
@Dani: How would the pathfinding of the AI deal with a dynamic environment? Also, how would it produce more natural paths? For example, if a cell senses some food behind an obstruction, it will normally walk into the obstruction, then slide to the left or right along the obstruction until it is no longer in the way, and then go straight for the food. This is because AI pathfinding usually prevents them from backtracking.
However, with a more natural pathfinding system, the cell takes a route around the wall, without walking into it and sliding along it, and then curves around and goes for the food.
Would this work with the Pac-Man system? | |
| | | Ionstorm Newcomer
Posts : 11 Reputation : 1 Join date : 2013-01-17 Location : Cambridge
| Subject: Re: Crash Course into AI Fri Jan 18, 2013 7:50 pm | |
| Not meaning to butt in here, but Nick, the first example uses a sub-par pathfinding system: it uses heuristics (rules of thumb) to work out a fast way of calculating the path, by working along walls to decrease the processing power needed.
The second one shows a non-optimized path of the kind that could be found using an A* pathfinding algorithm or a navigation mesh with spline curves.
For example, with an A* pathfinding algorithm you could have a grid that you can walk on; then, when anything sits on top of it in game, it sets points on the grid that you can't go to.
With a navigation mesh, you change the mesh on the fly and then work out a path that stays within your movable area.
In summary, you could have the nice curved movement path, but it would use more processing power, as the computer would need to work out more extensive options for the paths it could follow and then choose the nice curvy one.
I hope this gives an idea. Over to you, Dani. | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Fri Jan 18, 2013 9:19 pm | |
| Well, I actually thought that we didn't need any pathfinding for the cell stage. Other than other cells (which are more or less covered by the basic AI), what else can get in the way of a cell?
Anyway, if we need pathfinding, the AI algorithm only needs the distance (the real distance, having to go around obstacles), not the exact path. However, in order to know the real distance, we need to calculate the path.
Unless the environment is really, really simple, just a heuristic (what is called a greedy algorithm) won't make a good search algorithm for pathfinding. It gets stuck in places too much (in Nick's example, it would only try to go through the obstacle, not around it). We need some kind of A*. I personally don't know how A* works with anything other than a discrete space, but we can easily simplify our problem to a discrete space. I had never heard about spline curves before, and even less about how they can be integrated into pathfinding.
Regarding a dynamic environment, the easiest way would be to calculate the path for the current environment and recalculate it when the environment changes (we actually need to recalculate it anyway).
What a pathfinding algorithm usually finds is the second option.
I believe there are already some libraries that can do this for us, as it is a very common problem.
Actually, Ido, I love that you suggest things. To be fair, something like your idea would be the best AI for the first release of the microbe stage. The only problems are scalability (each element added is increasingly harder) and that you must know everything beforehand (for a pair of somewhat evolved cells, you don't know beforehand how they would interact, and whether cell A should fear cell B or it should be the other way around). Please, keep adding ideas. | |
| | | Ionstorm Newcomer
Posts : 11 Reputation : 1 Join date : 2013-01-17 Location : Cambridge
| Subject: Re: Crash Course into AI Sat Jan 19, 2013 6:56 am | |
| Ok, yes Dani, but what about other objects in the space, like food clumps, that could be in the way and that it has to navigate around?
The way an A* search could work is by having the objects occupy a position in the world that gets turned into a bounding-box-like obstacle on the mesh. For example, take the object's coordinates (X = 2.5, Y = 5.3), take its size, and turn that into a grid with 0s for free space and 1s for occupied space. In this example, the grid that the A* works on is:
000000
001100
000000
000000
then, as the object moves, it becomes:
000000
000110
000000
000000
and so forth. But with something like a navigation mesh with an area that you can move in, you find points that give a fast way of getting to the target, then send that through a splining algorithm that just smooths it out into a bendy line, then check that it still stays within the nav mesh, and then you're good to go. Remember, Google is your best friend.
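Very roughly, marking occupied cells and running A* over the free ones could look like this in C++ (the grid size, the bounding-box handling and all the names here are placeholders; it only returns the path length in steps, which is what the agent needs):
- Code:
-
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// 0 = free, 1 = occupied, row-major storage.
struct Grid {
    int w, h;
    std::vector<int> cells;
    Grid(int w_, int h_) : w(w_), h(h_), cells(w_ * h_, 0) {}
    int& at(int x, int y) { return cells[y * w + x]; }

    // Mark an object's bounding box (centre cx,cy, half-extents hw,hh) as occupied.
    void markBox(double cx, double cy, double hw, double hh) {
        for (int y = (int)(cy - hh); y <= (int)(cy + hh); ++y)
            for (int x = (int)(cx - hw); x <= (int)(cx + hw); ++x)
                if (x >= 0 && x < w && y >= 0 && y < h) at(x, y) = 1;
    }
};

// A* over the 4-connected grid, unit step cost, Manhattan heuristic.
// Returns the path length in steps, or -1 if the target is unreachable.
int pathLength(Grid& g, int sx, int sy, int tx, int ty) {
    std::vector<int> dist(g.w * g.h, -1);
    typedef std::pair<int, int> Node;                 // (g-cost + heuristic, cell index)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node> > open;
    dist[sy * g.w + sx] = 0;
    open.push(Node(std::abs(sx - tx) + std::abs(sy - ty), sy * g.w + sx));
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        int idx = open.top().second;
        open.pop();
        int x = idx % g.w, y = idx / g.w;
        if (x == tx && y == ty) return dist[idx];     // goal reached
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= g.w || ny < 0 || ny >= g.h || g.at(nx, ny) == 1) continue;
            int nidx = ny * g.w + nx;
            if (dist[nidx] == -1 || dist[idx] + 1 < dist[nidx]) {
                dist[nidx] = dist[idx] + 1;
                int heur = std::abs(nx - tx) + std::abs(ny - ty);  // Manhattan distance
                open.push(Node(dist[nidx] + heur, nidx));
            }
        }
    }
    return -1;
}
| |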
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Sat Jan 19, 2013 8:26 am | |
| If there is food in the way, you probably want to eat it anyway, so I don't see how it can be a problem. The problem with simplifying the world to a grid is that in the game cells can move in any direction they want, but in a grid only 4 directions are allowed. That means that for the cell represented by a 2, going for the food represented by a 3, all the paths marked with ones below are the same length. - Code:
-
111111113
100000111
100001101
100011001
100110001
101100001
111000001
211111111
| |
| | | Ionstorm Newcomer
Posts : 11 Reputation : 1 Join date : 2013-01-17 Location : Cambridge
| Subject: Re: Crash Course into AI Sat Jan 19, 2013 2:10 pm | |
| With the 'same length' problem, you could just store the path as an array of 2D vectors, or steps; then the one with the smallest number of steps would be the best. For example, these are vectors of movement for each update: (1,0)(1,0)(1,0)(1,0)(1,0)(1,0)(1,0)(1,0)(0,1)(0,1)(0,1)(0,1)(0,1)(0,1)(0,1) is worse than (1,1)(1,1)(1,1)(1,1)(1,1)(1,1)(1,1)(1,0), as you can move on a diagonal in a grid representation.
One place where pathfinding would be useful is if the floor has outcrops in the cell stage.
I agree that the A* + grid method is not very useful, so I suggest we use a navigation mesh for any pathfinding we do, as it gives an area we can move around in; then we just make the most realistic movement within that space. | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Sat Jan 19, 2013 11:12 pm | |
| The vector thing (or something like that) could help, but it's not the definitive solution. It just gives the cell 8 directions instead of 4, so it only alleviates the problem. It might be enough, though. Also, the vector (1,1) is about 1.4 times as long as the vector (1,0), so the total number of vectors is not the right measure. The right measure is the sum of all the vectors' moduli (lengths).
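A tiny sketch of that measure, with the two step sequences from the previous post hard-coded (illustration only):
- Code:
-
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Path length = sum of each step vector's modulus, not the number of steps.
double pathLength(const std::vector<std::pair<int, int> >& steps) {
    double total = 0.0;
    for (std::size_t i = 0; i < steps.size(); ++i)
        total += std::hypot(steps[i].first, steps[i].second);
    return total;
}

int main() {
    // 8 steps right then 7 steps up: 15 steps, length 15.
    std::vector<std::pair<int, int> > straight(8, std::make_pair(1, 0));
    straight.insert(straight.end(), 7, std::make_pair(0, 1));
    // 7 diagonal steps then 1 step right: 8 steps, length 7*sqrt(2) + 1, about 10.9.
    std::vector<std::pair<int, int> > diagonal(7, std::make_pair(1, 1));
    diagonal.push_back(std::make_pair(1, 0));
    std::printf("straight: %.2f, diagonal: %.2f\n",
                pathLength(straight), pathLength(diagonal));
    return 0;
}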
I believe navigation meshes are only the base for pathfinding; we still need some pathfinding algorithm on top of them. Also, they only work well for static obstacles, as recalculating them is slow and hard.
One more thing. I still don't know what could get in the way of a cell that calls for pathfinding (at least any pathfinding other than a straight line). Any obstacle that I can think of is so big compared to a cell that the things obstructed by it are outside the range that the cell can "see". For a comparison at human scale, it would be like a tower a few kilometers in radius, and that's the smallest I can think of. | |
| | | Ionstorm Newcomer
Posts : 11 Reputation : 1 Join date : 2013-01-17 Location : Cambridge
| Subject: Re: Crash Course into AI Sun Jan 20, 2013 5:40 am | |
| Why not use an array of hexagons?
OK, so to summarize: we have discussed that the AI will have some class for a species that controls the weights for interacting with things.
We have worked out that in the cell stage we will not need a pathfinding system as such, as cells will pick somewhere to go and then go as the crow flies.
I am also doing small bits of code in this fork, with a gameplay thing and an ASCII art interface: https: http://github.com/Ionsto/Thrive.git remove the space after the 'https:' - new members can't post links.
I'm not sure what else to add. | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Sun Jan 20, 2013 12:38 pm | |
| I don't like the idea of hexagons for pathfinding. They are a bit harder to code than a straight grid, and only allow for 6 directions of movement. I believe just a grid, with different movements each having a different cost, is easier and a bit better.
The AI (at least the one described here) doesn't choose one thing to go for and then stick to it. It makes decisions all the time.
I'm looking at your prototype, and it looks good so far. I haven't had any luck compiling it yet, probably because the compiler I was using (on my laptop) is quite bad. I'll have another go at it later. However, I don't really like your render loop; it's quite inefficient. It should work, though. | |
| | | Ionstorm Newcomer
Posts : 11 Reputation : 1 Join date : 2013-01-17 Location : Cambridge
| Subject: Re: Crash Course into AI Sun Jan 20, 2013 5:12 pm | |
| I was coding it based on memory of an HTML/JavaScript ASCII game I made. I wouldn't expect it to compile, as I coded the rendering loop while sleep deprived. I don't think we will need a pathfinding system in the cell stage, and I am not sure what to use for anything else - but now I'm going to go and code. | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Tue Jan 22, 2013 12:42 pm | |
| I was thinking about it, and I remembered another kind of function that the AI needs (remember, the last thing I did was ask for help about what functions are needed). A different kind of function:
F(s,a) = eating
Actually, the full function would be eatingFoodTypeA, so each food type can have different weights.
This function's value will be 0 if not true (not eating) or 1 if true (eating). As simple as that.
Some other functions that I can think of right now are isGettingAttacked or isUnderSunlight.
For isGettingAttacked we will have to assume that just passing next to the other cell or animal means we will get attacked, even though it won't always be so. That means that agents will sometimes risk passing near a rival if the expected reward is high enough.
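In code these would just be features that return 0 or 1 instead of a distance-based value; a minimal sketch (all names invented):
- Code:
-
// Binary features: 1 if the predicted outcome of (state, action) has the property,
// 0 otherwise. They plug into Q(s,a) exactly like the distance-based features.
struct Outcome {               // invented name: the predicted result of an action
    bool eatingFoodTypeA;
    bool gettingAttacked;      // assume true whenever we end up next to a rival
    bool underSunlight;
};

double fEatingFoodTypeA(const Outcome& o)   { return o.eatingFoodTypeA ? 1.0 : 0.0; }
double fIsGettingAttacked(const Outcome& o) { return o.gettingAttacked ? 1.0 : 0.0; }
double fIsUnderSunlight(const Outcome& o)   { return o.underSunlight ? 1.0 : 0.0; }
| |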
| | | NickTheNick Overall Team Co-Lead
Posts : 2312 Reputation : 175 Join date : 2012-07-22 Age : 28 Location : Canada
| Subject: Re: Crash Course into AI Mon Apr 01, 2013 11:43 am | |
| Dani, I read over the past posts, and I hope this hasn't already been mentioned, but how would the organism's level of intelligence (aka brain size or development, in game terms) affect its ability to react through the AI?
For example, say organism A sees organism B go for a drink at a watering hole, but then organism C emerges from the water and catches B in its jaws. Organism A, I would imagine, would either go to the watering hole anyway due to poor intelligence, or recognize the danger and not go due to higher intelligence.
I guess the first and more important question is, how will an AI learn from mistakes, not of itself, but of those it observes?
And secondly, how will the organism's brain development affect its decision making? | |
| | | Daniferrito Experienced
Posts : 726 Reputation : 70 Join date : 2012-10-10 Age : 30 Location : Spain
| Subject: Re: Crash Course into AI Mon Apr 01, 2013 12:23 pm | |
| A learning AI can learn from other agents it is not controlling. As long as it sees the other agent's actions and their outcomes, it learns, no matter whether the second agent is just being dumb or doing the best actions. The problem is that this only applies if both agents are really similar. Let's look at a few examples:
Case A: A shark sees a fish out in the open. It eats the fish. What it learns is that being out in the open got the fish killed, so it is bad. That's dumb. It seems this only applies if both animals are on the same trophic level. Let's see another example:
Case B: Now we have three animals: a cat, a small bird and a mouse. Both the small bird and the mouse are in danger of being eaten by the cat. The mouse sees the bird escape the cat by flying, so it learns that flying is good for escaping. It tries to fly and gets eaten. The bird saw the mouse escape the cat by running into a small hole. It tries to do the same and gets eaten.
As you see, learning from other agents with different characteristics can produce dumb reactions. I agree that in situations that can apply to the same kind of agent it can be good, but drawing the line between where the AI is learning good or bad behaviours is really hard.
I don't know how different brains could affect this system. This simulates instincts more than reasoned actions. Maybe a better brain can override this system at some times, or unlock new actions for the creature (like using a stick for getting ants out of their colony). | |
| | | NickTheNick Overall Team Co-Lead
Posts : 2312 Reputation : 175 Join date : 2012-07-22 Age : 28 Location : Canada
| Subject: Re: Crash Course into AI Tue Apr 02, 2013 10:31 pm | |
| Ahh, I see that it is a very complex situation. What would you suggest then? | |