Thrive Game Development

Development of the evolution game Thrive.
 
 Achieving Sapience

NickTheNick
Overall Team Co-Lead
PostSubject: Re: Achieving Sapience   Sat May 02, 2015 1:27 pm

Yeah, I agree with taste being a text notification, and thus with smell being a visual indication.

Echolocation wouldn't work as a minimap because it would only give a small, two-dimensional view of your surroundings, whereas if it uses visual markers you could see it full sized and in all three dimensions around you.

The reason so many things will be visually represented is that vision will not necessarily be the dominant sense in many species. You could have an organism with a weak sense of vision but a strong sense of smell, and that player would rely heavily on the coloured markers to find food, escape threats, etc. It would be the same with echolocation: a species with strong echolocation will likely have weaker senses otherwise, including sight, so it would again depend on the visual cues of the echolocation to track things. Thus there won't be too much overlap among the visually represented senses.

_________________
Look at how far we've come when people thought we'd get nowhere. Imagine how far we can go if we try to get somewhere.
NickTheNick
Overall Team Co-Lead

PostSubject: Re: Achieving Sapience   Mon May 25, 2015 3:45 am

I apologize for the absence. First I was waiting for replies, and then I got sick.

Since no one has yet responded, I'll take the chance to go over my approach to simulating vision in the game.

Part 2.1: Vision

This sense will be the most important to the player, simply because we mostly play games based on what we see on the monitor. In Thrive, the player can play either in first person, where they sense everything as their organism senses it, or in third person, where they sense everything like a human while still controlling their organism. I can imagine there would be a lot of problems with a third-person mode for the organism mode, but for now we won't throw away the idea. We will stick to talking about first person for now.

The organ that will allow any form of sight for an organism is an eye. In Thrive's context, an eye is a collection of photoreceptive cells that translate electromagnetic waves into visual images, i.e. they see light. Any organism can evolve this primitive, first step towards vision. In the past there have been suggestions that there be many different eye organs, each with a different role and specialization and focus, but I think it will be both simpler and more elegant if we kept it all as one organ, an eye, and then allowed that organ to have its properties and limitations altered through evolution. The properties I can think of right off the bat for an eye on an organism would be:

Physical Properties
Size: Pretty self-explanatory. Eyes will start as just a small patch on the skin, but can be shaped in the Organism Editor each generation. I'm not too sure how the shape of the eye should affect its cognitive properties, however.
Shape: Same as above.
Texture/Colour: Similar deal. Again, I don't know whether we plan to have the colour or texture of the eye affect its cognitive properties.
Energy Cost: It will cost a certain amount of nutrients to sustain, mostly dependent on its size.

Cognitive Properties
Clarity: Some sort of range over which the eye gives effectively clear vision, or if we want to keep it simpler, an overall clarity in the resolution of what you see.
"Zoom": Sort of like a hawk-eye adaptation, the ability to focus your vision on something far away, simulated in-game by zooming in your vision. We could also use zoom to focus on things really small.
Light range: A certain range of wavelengths you can see. Humans of course can't see ultraviolet, x-rays, gamma rays, or many other light wavelengths, and so too will your organism be limited in what it can see. The light range of your eyes could drastically change the appearance of the game.
Water Adaptability: How much, or how little, being underwater impedes vision.



As seen in this picture, the visuals of the game will literally change based on the light wavelength range of your organism's eyes.

Also, as tjwhale mentioned earlier, the eye would require a certain portion of the brain to be dedicated to it. Increasing its size or cognitive ability would require the available brainspace be present first. And not just any brainspace, specifically space in the Sensory Cortex of the brain (the part dedicated to sensation).

I feel like I'm missing something with this part, so make sure to point out anything I should add. Otherwise, other suggestions for how to simulate vision, or ideas for physical and cognitive properties of the eye are welcome (and kind of necessary, I can't keep responding to myself on this thread).

tjwhale
Theorist

PostSubject: Re: Achieving Sapience   Mon May 25, 2015 5:05 am

Nice, this is largely what I was imagining too, which is a good feeling when it happens.

To add to this some good things to have would be

1) Field of vision - maybe you look through a little hole (or a set of little holes) rather than see the whole screen

2) Depth perception - we could mess around with the rendering to get a flatter image when you only have 1 eye - this would need some testing

3) Photo-sensitivity - having an iris that can dynamically alter the amount of light that comes in should be an option - if you don't have it you will be blinded by bright lights easily

4) In terms of processing, it would be cool if you could add some features onto your eye - like automatic symbol recognition, where it would tag animal tracks for you, or motion sensing, where anything moving would be highlighted really brightly.

5) I think in terms of eye size, from a physics perspective, the only problem would be if the aperture of the eye were of a similar size to the photon wave packet; then you would get scattering. Basically, very small eyes would work fine.

It seems (from some naive googling) that eyes generally scale with body mass but vary from that with function. So nocturnal animals and birds have disproportionally big eyes for their mass, while reptiles tend to have small eyes.

6) Interestingly the sclera (the white of the eye) is hypothesized to be important in communication. Basically it makes people feel comfortable around you if they can see where you are looking. That could be an option we could offer, you could choose to have whites but that would mean you are at a disadvantage when fighting, but you gain an advantage when fraternising.

7) Changing the number of rods and cones would be interesting too. Rods function better in low light but only do black and white. Cones can do colour and work better in strong light. It would be a pretty cool moment in the game when you got your first cones.

StealthStyle L
Newcomer

PostSubject: Re: Achieving Sapience   Mon May 25, 2015 5:58 am

I feel like this may have been talked about before, but this seems like a good place to confirm it. What happens if you have eyes that face in different directions? Would it be like a split-screen, although perhaps without a solid line in the middle, so it looks smoother and more realistic?

Oh also, depending on the level of detail, the eye would need to be connected to the brain via optic nerves. Therefore, you can't for example have an eye at the end of a tail, unless you wanna draw a connection all the way from there to the brain.

@tjwhale
I can't see a way to change the numbers of rods and cones unless you click on the eye and use sliders.
moopli
Developer

PostSubject: Re: Achieving Sapience   Thu May 28, 2015 12:00 pm

I've written at length on this before -- on reddit, and in some other threads here. In essence, the question of color depends on your photoreceptors -- how many do you have, and what are their excitation spectra?

We humans have 4 types of photoreceptor -- 1 rod, and 3 cones:
(source)

For comparison, here are the photoreceptors of some other species:
(source)

Anyway, what are these useful for? Well, mathematically, here's what happens when we look at non-luminous objects:

  1. Light hits an object. The incident light has a specific spectrum, which we can't assume is flat. We can sample the spectrum with a vector of size n over [0, inf)^n.
  2. The object has a specific transmission spectrum (defining the light it would reflect/transmit if it were hit by a flat spectral source). This would be an n-vector over [0, 1]^n.
  3. A certain amount of light reflects off the object. Its spectrum would be the elementwise (or Hadamard) product of the two spectra given above.
  4. That light (which is another vector over [0, inf)^n, as it's a light spectrum) then hits the eye in question.
  5. The eye contains k types of photoreceptor (where k < n). Each type of receptor has an excitation spectrum (another vector over [0, 1]^n) which we dot with the incident light, giving us a k-vector representing the color which the organism sees.
  6. To account for the wide ranges of environments an organism might encounter, we can do a flat across-the-board scaling of image brightness here (think irises).
  7. To account for vastly differing levels of brightness within a single field of view, we can introduce a per-pixel intensity scale factor to simulate photoreceptor saturation. When a particular color value is overexcited, the scale factor drops proportionally to the overexcitation; and similarly, if the light hitting that photoreceptor dims, the reverse happens. Not only does this fix issues with vast differences in light intensity, but it also gives us realistic afterimages for free (since the saturation factor decays over time, quickly changing fields of view will lead to saturation factors that don't keep up), which is pretty cool.
  8. Finally, once scaled, we can convert the field of colors the organism perceives into a field of colors the computer can display. If the organism has 3 photoreceptors, the conversion is easy -- arbitrarily assign red to one, blue to another, and green to another. If the organism has more, we have to do a projection.
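For concreteness, here is a minimal pure-Python sketch of steps 1-5 and 8 above. Everything in it is a toy assumption on my part (the bin count, the random spectra, the Gaussian excitation curves), and a real implementation would run this per pixel, almost certainly in a shader:

```python
import math
import random

n = 32  # number of spectral samples (bins across the wavelength range)
random.seed(0)

# Step 1: incident light spectrum, a vector over [0, inf)^n
incident = [random.uniform(0.0, 2.0) for _ in range(n)]

# Step 2: the object's transmission spectrum, a vector over [0, 1]^n
transmission = [random.random() for _ in range(n)]

# Step 3: reflected light is the elementwise (Hadamard) product of the two
reflected = [i * t for i, t in zip(incident, transmission)]

# Step 5: k photoreceptor types, each with an excitation spectrum in [0, 1]^n,
# modelled here as Gaussian bumps centred at different sample indices
def excitation_spectrum(center, width=3.0):
    return [math.exp(-0.5 * ((i - center) / width) ** 2) for i in range(n)]

receptors = [excitation_spectrum(c) for c in (8, 16, 24)]  # k = 3

# Steps 4-5: dot each excitation spectrum with the incoming light,
# giving the k-vector color the organism perceives
perceived = [sum(e * l for e, l in zip(spec, reflected)) for spec in receptors]

# Step 8: with k == 3, arbitrarily assign the channels to R, G and B,
# normalising so the brightest channel is displayable
peak = max(perceived)
rgb = [p / peak for p in perceived]
```

With k > 3 the last step would instead project the k-vector down to three display channels, and steps 6 and 7 would scale `rgb` by the iris and per-pixel saturation factors before display.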


Obviously, this whole process is very expensive. A simplifying option I just came up with is to store only the peaks of each spectrum, and use some funky mathematical proofs to derive a cheaper way to multiply spectra.

Another simpler thing is the whole idea you brought up above with the "spectral range" stuff -- thing is, I'm not sure how this one works. What math is done in this case to convert light in the game to the light you see?

I guess the broader question is, do we try to model photoreceptors, or do we simplify drastically?
tjwhale
Theorist

PostSubject: Re: Achieving Sapience   Thu May 28, 2015 2:51 pm

1) I guess one way to make it cheaper might be to filter the light by your receptors first and then take its product with all the objects in the environment.

So if you can only see 500 - 510 nm then only that light is calculated and all other light is discarded from the start.
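A sketch of that precomputation (all names here are hypothetical, and spectra are sampled as in the earlier posts): crop every spectrum to the creature's receptor band once, up front, so the per-frame multiplications never touch wavelengths that would be discarded anyway.

```python
def crop_to_band(spectrum, wavelengths, lo, hi):
    """Keep only the samples whose wavelength (in nm) falls inside [lo, hi]."""
    return [(w, s) for w, s in zip(wavelengths, spectrum) if lo <= w <= hi]

wavelengths = list(range(400, 700, 10))  # 30 samples covering 400-690 nm
spectrum = [1.0] * len(wavelengths)      # a flat toy spectrum

# A creature that can only see 500-510 nm: every other sample is dropped,
# leaving just two samples to multiply per object per frame
visible = crop_to_band(spectrum, wavelengths, 500, 510)
```

Whether a cropped, variable-size multiply actually beats a fixed-size one in practice is an open question; this only shows the shape of the idea.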

2) When representing it we could take the visible spectrum (which we know how to display on the monitor), Violet - Blue - Cyan - Green - Yellow - Red and map that to whatever spectrum you can see.

So if you can see 500 - 510 nm then 500 is violet and 510 is red and everything else is in between.

This is a bit unexciting (as if you have 16 colour receptors you still just see RGB). Also if you can see 500-510 and 600-610 but nothing in between then your world is going to be violet and red and nothing in between, but maybe that's cool.

Another problem with this approach is if you add a new colour receptor then the colours of all the old objects you could see will change.
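One hedged way to implement that mapping, using Python's standard colorsys module (the HSV hue endpoints chosen for "violet" and "red" are my own assumption, not anything settled in this thread):

```python
import colorsys

def band_to_rgb(wavelength, band_lo, band_hi):
    """Stretch the creature's visible band [band_lo, band_hi] (in nm) across
    the full violet-to-red display range; returns an RGB triple in [0, 1]."""
    t = (wavelength - band_lo) / (band_hi - band_lo)  # 0 at band_lo, 1 at band_hi
    t = min(max(t, 0.0), 1.0)
    hue = 0.75 * (1.0 - t)  # HSV hue: 0.75 is violet, 0.0 is red
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# For a creature seeing only 500-510 nm, 500 nm renders as violet and
# 510 nm as pure red; everything in between is interpolated
```

Note that the mapping is relative to the band, not to the pigments, which is exactly the problem raised above: widening the band by adding a receptor re-colours every object the creature already knew.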

3) I guess when it comes to modelling each photoreceptor it depends how many you have. If you have 3 then we could model each one, but if you have 3 million it's best not to.

This ties into gameplay, can you make a convincing game out of 3 photoreceptors? Will it be fun? I think there is an issue here that if your screen is just shades of red all over people will not want to play.

Should we make sure that somehow you always get a half-decent picture of the world so the game is definitely playable or should we be as accurate as possible?

The problem with having 2 modes, 1st person with filters and 3rd person with normal view is you will be highly incentivised to play in 3rd person all the time. Imagine having to hunt or fight with 3 pixels, you wouldn't bother unless you were forced to.

Though that is probably a testing issue. (and moreover an issue of why people are going to play the game in the first place, is it fun or a chance to see the world from a new perspective?)

4) Apologies for my ignorance, but how is this stuff normally handled by the graphics card? If I make a 3D scene with a light in it, how are the colours displayed on the screen calculated? Can we piggyback on some of that machinery? If we can use the chipset it's going to speed things up a lot.

A friend of mine was making a lighting system and once he switched from python to opengl he got like a 1000x speed increase.

Is there a texture file which shades each polygon, and then the lighting is applied to that, and then the whole thing is stored in a depth buffer? If so, we might save a lot of maths by letting the chip do it.
moopli
Developer

PostSubject: Re: Achieving Sapience   Fri May 29, 2015 3:26 pm

1) That's a possibility, but I'm not sure whether cropping the spectra and resizing would be cheaper or more expensive than always doing the same number of floating-point multiplications. There's also a bunch of precomputation we could do, trading space for time. It would be premature optimization until I try to program this, of course.

2) I don't like the idea of simply shifting the spectrum -- that's boring, more like being a human with night-vision goggles than another creature which sees colors completely differently. The same scene, under the same light, looking completely different, because pigments that look similar to one creature's eyes look very different to another. IMO, it's immensely cool, and could let us model, for example, organisms that look camouflaged to their natural predators, but stick out like a sore thumb to an organism with substantially different eyes.

Of course, it would be much simpler than coloring the world based on the actual light spectra. I think the massive drop in realism (and the associated ease of modeling it evolutionarily) would be too much of a loss though.

3) I think my meaning w.r.t. "3 photoreceptors" got lost in translation somewhere -- I was talking about the number of types of photoreceptor, ie, the dimensionality of the color space. Our eyes, and therefore our monitors and graphics cards and graphics libraries and image formats etc, have 3 dimensions in their color space (the 4th, due to rod cells, is ignored at high light levels).

I think it's reasonable that people would want to play in 3rd person if their organism has eyes too simple for a human to put up with. Of course, if you only have 3 photoreceptors, then you'd be almost necessarily a very small organism, using them for sensing light levels, and not for seeing images, so it's a moot point to try and see what they see. It would be kinda like first-person bacterial chemotaxis.

However, while it's reasonable, it seems like a bit of a cheat -- switching into 3rd-person essentially gives your organism a free human-like eye floating above its body, letting you take a step back in realism for the sake of playability, and making you think your species has better senses than it actually does. That could be the entire point of 3rd-person mode, actually -- until we have playable (fast, etc) first-person vision, and for people who don't want to use their organism's very strange eyesight, we have third-person mode.

4) It's complicated. The trick to getting this idea off the ground at all is to do as much as possible using shaders, precompute what we can, etc. Before that, though, a working prototype is needed.
tjwhale
Theorist

PostSubject: Re: Achieving Sapience   Sat May 30, 2015 4:46 pm

Can I ask a couple of clarification questions?

1) Are you thinking that the geometry of the scene will still be the same but the shaders will be different or are you thinking of something more complex?

Like if I am looking at a teapot with eye A then I see a red teapot and if I look with eye B then I see a blue teapot but I always see a teapot.

Or are you thinking of some sort of ray-tracing scheme where eye A sees a shadow which lets them know it's a teapot but eye B sees no shadow and thinks it's a lump with a handle?

2) What about light sources? Will there just be the sun(s) (and at night moons) to work with, or will there be other things? Will there be dynamic shadows, or is that too complex? (Like, can you hide in the shadow of a tree, and is that shadow calculated dynamically or always in the same place?)

Will there be reflected light as well and how many times?

-

This whole idea is really interesting. It's exciting to think how unique this system could be if we could wrangle it into something reasonable.

I guess what's a bit worrying is that it all has to be done at speed (at the refresh rate of the monitor to look convincing (though update speed of your eyes could be a variable for gameplay purposes)).

-

This is just an idle thought, nothing serious.

moopli wrote:
It would be kinda like first-person bacterial chemotaxis.

I actually really like the idea of "you see what your organism sees" for the microbe stage, so you just get given a list of proteins in your locality and you can spam proteins out. You can move (by powering one or some of your organelles) and touch stuff but you just get a touch notification and you can't see. It'll just give you some genetic information about what you interacted with, nothing more.

I know it's crazy. Just really fits with building up the moment when you first open your eyes being a powerful spiritual experience.
NickTheNick
Overall Team Co-Lead

PostSubject: Re: Achieving Sapience   Sun Jun 21, 2015 2:36 pm

I've been thinking about this for the past couple days and I realized that we have struck a pretty major problem with the game's design.

The Microbe Stage is in third person, because no one wants to play from the perspective of the cell (maybe a few people, we'll leave that to mods). The Strategy Mode is in third person. The transition from 2D to 3D during the Multicellular Stage is in third person. Yet somehow, we have to fit in the fact that the 3D part of the Multicellular Stage and the Aware Stage are in first person. I see many problems with this.

First off, being forced into first person after having played a whole stage and a half in third person is not very seamless at all, and probably a bit frustrating. Obviously your initial 3D organism would have sharp eyesight, and so as the player you will actually be losing the ability to see through the 2D-3D transition. Additionally, there is a large incentive to not evolve unique forms of sight or sensation that are significantly different from what humans perceive, because then you'll be stuck seeing everything in red or having eyes on the side of your head which will mess up your field of view or what have you. I know these issues were mentioned before, but I didn't realize how significant they were until thinking about them holistically. Although having the organism mode in third person takes away some of the immersion, and takes away the purpose of many of the ways you could evolve your perception, I think it's necessary given the consequences and inconsistencies of having it be first person. We could still fit in features like an ability to zoom with stronger eyes, or improving clarity, or maybe some indicators for things you can see outside of the visible light range.

I'm interested in hearing what you guys think about the issue. If the discussion becomes big enough, we could branch this off into its own thread and continue this thread with the next part.

MitochondriaBox
Learner

PostSubject: Re: Achieving Sapience   Sun Jun 21, 2015 7:03 pm

NickTheNick wrote:
First off, being forced into first person after having played a whole stage and a half in third person is not very seamless at all, and probably a bit frustrating. Obviously your initial 3D organism would have sharp eyesight, and so as the player you will actually be losing the ability to see through the 2D-3D transition. Additionally, there is a large incentive to not evolve unique forms of sight or sensation that are significantly different from what humans perceive, because then you'll be stuck seeing everything in red or having eyes on the side of your head which will mess up your field of view or what have you. I know these issues were mentioned before, but I didn't realize how significant they were until thinking about them holistically. Although having the organism mode in third person takes away some of the immersion, and takes away the purpose of many of the ways you could evolve your perception, I think it's necessary given the consequences and inconsistencies of having it be first person. We could still fit in features like an ability to zoom with stronger eyes, or improving clarity, or maybe some indicators for things you can see outside of the visible light range.

Hold on, the late Multicellular Stage and Aware Stage are primarily meant to be in 1st person? All the time I've been here, I gathered that 3rd person would be implemented by default, and 1st person would be an added feature or gimmick put in if possible, accessed by the press of a button when desired.

Then again, I've always based some assumptions of Thrive on the earlier versions of Spore, which always showed a 3rd person Creature Stage. Then again, the majority of the fanbase probably has, too; they want Thrive because they want Spore 2005 (it even rhymes), and there's no other game in the works like it out there.
 
Thrive Game Development :: Development :: Design :: Gameplay Stages :: Aware-