How about differentiating H5.G a bit more from past games by adding a more digital, synthesized quality to the human A.I.s' speech patterns instead of having them sound exactly like another human being? They just don’t have our vocal cords ;) Roland and Cortana in H4, especially Roland, might have been even more interesting as characters if this artificial sound element had been added to their personalities. It could add more diversity and more immersion to the amazing Halo universe. The ghostly blue female character introducing the new WildStar launch trailer and the glowing green male character in it are, I think, wonderful examples of what A.I.s in the Halo universe could sound like with this effect. (By the way, anybody else see the similarities between the planet Nexus in WildStar and a Forerunner world, and between elements of WildStar’s backstory and the Halo universe? Pretty cool!) http://youtu.be/Auucdbb3l9c
I think the fact that the AIs sound human adds to the immersion. Don’t you think it’s possible that in 500 years, we’ll have the understanding and technology to be able to digitally emulate what vocal cords sound like?
> I think the fact that the AIs sound human adds to the immersion. Don’t you think it’s possible that in 500 years, we’ll have the understanding and technology to be able to digitally emulate what vocal cords sound like?
Basically this, although an AI could sound as human or inhuman as they like.
What I’d like to see is an option to toggle whether the Storm speaks human or Sangheili. In Sangheili, they could sound less machine and more organic, but in English they’d sound more synthesized, as if to emphasize the fact that it’s a translation.
This would be nice, because I haven’t learned much beyond “Azuda wort” and “Dono’ingænen,” and I’m sick of trying to figure out what they’re saying.
> I think the fact that the AIs sound human adds to the immersion. Don’t you think it’s possible that in 500 years, we’ll have the understanding and technology to be able to digitally emulate what vocal cords sound like?
That’s one thing that actually bugs me about a lot of sci-fi: how bad the audio/video quality of their recorders always seems to be.
I think varying the sound of A.I.s and differentiating them from the sound of people would add a more dynamic quality to the story. Dynamism in sound definitely makes things more interesting and sets characters apart, even enhancing them by creating a unique identity that we as spectators associate with them, drawing us further into the fictional universe we find ourselves in. 343 Guilty Spark, I think, is a good example of this. His uniquely identifiable voice created such an interesting, rich character, with great depth and individuality, that I find it impossible to imagine the Halo universe without him.
I think synthesizing AI voices at this point would pull me out of the story.
This could actually become an interesting character aspect.
We know some AIs, like BB, choose an avatar that is distinctly non-human to distance themselves from their human creators. AIs are based on human brains, but they are separate from and beyond humanity. So a particularly hostile or anti-social AI could deliberately choose to speak in a more synthetic, mechanical manner, if only to unnerve any people they interact with.
Auntie Dot did have a voice filter going, but she was a dumb AI.
I don’t know… The reason I enjoy Halo’s AIs so much is that they have very human personalities. Even BB, who was nothing but a floating cube, was very human in the way he spoke and acted. A big part of this humanity is in the way they speak, and their human-like voices. With a synthesized voice, they couldn’t portray emotion the way they do right now.
Putting in synthesized voices would only ruin their personalities. Can you imagine Serina being sarcastic with a computer voice? It just doesn’t work.
> Putting in synthesized voices would only ruin their personalities. Can you imagine Serina being sarcastic with a computer voice? It just doesn’t work.
Indeed, the whole point of smart AIs sounding human is to emphasize their humanness. They aren’t droids from Star Wars.
Spark’s voice distortion is minor, and can probably be attributed to his being Forerunner-made, his age, and possibly his speech being run through a translator.
Hmm, I wouldn’t actually like that on a Smart A.I. Now if it was on a dumb A.I./V.I. I wouldn’t mind it.
It seems to me that one of the major themes in Halo lore is the relationship between civilizations and their AIs. Comparing Forerunner attitudes towards their ancillas (literally, “an aid”) with how humans act, you’ll see that, with the exception of Bornstellar and 343 Guilty Spark in Silentium, Forerunners don’t make friends with their AIs, or at least don’t treat them as friends most of the time. Besides, Bornstellar was a character who was supposed to break the norms of Forerunner society, so that fits in even more perfectly.
On the other hand, Master Chief and Cortana are close friends, BB becomes close to Maggie and Kilo-5 in general, etc.
So yeah, that’s a long way of saying that in the Halo lore, human built smart AIs are, well, humanized.