AMA with Rana Gujral, CEO of Behavioral Signals and Noonies Nominee

Hi Rana! What is the future role, if any, of emotion AI in improving health care?

What are some of the risks of using AI to interpret human emotions, and how can they be overcome? Thanks!

How do you think the moral norms in society might change if AI becomes self-aware?

Hey! Can you recommend some of the resources to learn more about emotional AI? Thanks!

What type of companies are the most profitable in terms of investment?

Would you rather teach a robot to differentiate emotions or a human to automate work processes? And why?

Can you please share the story of Behavioral Signals founding?

Hey @RanaGujral thanks for taking the time. I’ve been very keen on AI’s potential impact on customer service, like how to prioritize inbound emails (and have been experimenting with how AI can order story submissions). I read that your company “Automatically matches each customer to the best-suited agent using voice data and emotion AI” — my question is taking a look at how machines can imitate humans — what is emotion AI’s major breakthrough of the last decade? And what will be the breakthrough in the next decade?

Hi! What is the statistical error of emotional AI apps in general? How much of an influence does it have?
Thanks

Hi @jonathan-coder - This is a great question and we could chat at length about what it takes to be a successful entrepreneur. I feel there are several skills and experiences that will determine your success, but perhaps the most important aspect is “mindset”. You must start to think like an entrepreneur. Some of the ideas I have spoken about are listed in this brilliant summary by Ernestine Siu:

https://ernestinebusiness.wixsite.com/imaginetalksblogs/post/ways-to-improve-client-communication

Hi @osho - Great question! Conversational AI is the tech that allows computers to speak human and communicate the way we do. It’s not just one thing but a set of capabilities that allow inanimate systems to recognize human language, understand what is being said, and respond the way a human would. What makes a great conversational AI app different from an average one is that it enables computers to process not just “what” is being said but also “how” it is being said. That is what a human conversation is all about.
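
As a purely illustrative sketch of that “what” vs. “how” split (the classifiers, labels, and routing rules below are hypothetical stand-ins, not Behavioral Signals’ product), a conversational system might combine a text intent with an acoustic emotion estimate like this:

```python
# Illustrative only: "what_model" and "how_model" are hypothetical stand-ins for
# a text intent classifier and an acoustic emotion classifier.
from dataclasses import dataclass


@dataclass
class Turn:
    transcript: str      # the "what": the words the user said
    audio_features: list  # the "how": acoustic features (pitch, energy, tempo, ...)


def choose_response(turn: Turn, what_model, how_model) -> str:
    intent = what_model(turn.transcript)      # e.g. "cancel_subscription"
    emotion = how_model(turn.audio_features)  # e.g. "frustrated"

    # The same intent gets a different treatment depending on the detected emotion.
    if intent == "cancel_subscription" and emotion == "frustrated":
        return "escalate_to_human_agent"
    if intent == "cancel_subscription":
        return "offer_retention_discount"
    return "continue_scripted_flow"
```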

Hi @andy-connory - This is exactly what we do! It’s a complex methodology, but in essence: audio recordings are first processed to separate speech from noise or silence and then to determine who spoke when. Next, we extract a rich set of interaction-level and speaking-style features, such as pause duration, speaking rate, and intonation-related characteristics, at various temporal resolutions. Overall, more than 600 features are used to represent each utterance in the recording in the domain of emotions and behaviors. Based on these features we then train a set of deep models to identify various emotional and behavioral states. Finally, a separate set of models is trained on these emotional and behavioral outputs to predict the domain-specific KPIs.
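
For intuition, here is a rough Python sketch of that flow. The diarize and featurize callables and the model objects are assumptions standing in for the real components; this is not Behavioral Signals’ actual API or feature set:

```python
import numpy as np


def analyze_call(audio, sample_rate, diarize, featurize, emotion_models, kpi_model):
    # 1. Separate speech from noise/silence and determine who spoke when.
    segments = diarize(audio, sample_rate)

    # 2. Represent each utterance with interaction- and speaking-style features
    #    (pause duration, speaking rate, intonation, ...); the real engine uses 600+.
    features = np.stack([featurize(seg) for seg in segments])

    # 3. Deep models map the feature vectors to emotional/behavioral states.
    states = {name: model.predict(features) for name, model in emotion_models.items()}

    # 4. A separate model maps those states to domain-specific KPIs,
    #    e.g. the likelihood that the call resolves the customer's issue.
    kpis = kpi_model.predict(np.column_stack([states[n] for n in sorted(states)]))
    return {"states": states, "kpis": kpis}
```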

Hey @irwin23 - One possible application would be to build capabilities that support psychologists by using this technology to capture the early onset of mental health issues during interviews with patients. This can help psychologists reach a more accurate diagnosis and formulate a personalized, targeted treatment, using real analytics to enhance their own judgment. The technology can capture the emotions in a voice without taking into account the context of what is being said; this allows us to build software systems with anonymized voice data that respect and protect the privacy of the patient.

Hey @Kyle - I think the concern around using it in a social context is obviously privacy. I think that’s a legitimate concern that most people today in the connected ecosystem are worried about. We absolutely need to make sure that the privacy concerns are managed properly and that there are proper disclosures in place. We also need a novel approach to maintain this privacy. For example, our technology can capture the emotions in a voice without taking into account the context of what is being said; this allows us to work with anonymized voice data while respecting and protecting the privacy of the individual.

Hi @adam_lo - Stemming from research and a passion to bring our ground-breaking patented speech-to-emotion and speech-to-behaviors technologies to market, our founders Professors Alex Potamianos and Shri Narayanan started Behavioral Signals in 2016. We were spun out of the University of Southern California and research is still very much in our DNA!

Hey @jack-maccourtney - I believe this varies greatly. At the core technology level, accuracy is measured on separate recordings of interactions. These interactions have been reviewed by human experts who have specifically identified any occurrences of human behaviors. We typically have multiple experts who need to agree on their observations (if they don’t, the recordings are considered ambiguous and are not used for accuracy measurements). The engine then achieves high accuracy if it agrees with the human annotations. At Behavioral Signals, we’ve certainly come a long way. Our engine’s ability to deduce these signals is currently at around 97% of human ability (F-score of 0.78). We expect to end this year at 102% of human ability, in essence becoming more accurate than the average human! For example, our engine is able to predict whether a sales call will be successful or not by analyzing the salesperson’s voice, making a prediction within the first 30 seconds of the call (with about 85% accuracy)!
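
To illustrate that measurement protocol (this is a minimal sketch, not Behavioral Signals’ actual tooling), an agreement-filtered F-score could be computed roughly like this, assuming one binary label per recording:

```python
from sklearn.metrics import f1_score


def evaluate_engine(annotator_a, annotator_b, engine_output):
    """Each argument is a list of labels (e.g. 1 = 'frustration present'),
    one entry per recording, in the same order."""
    y_true, y_pred = [], []
    for a, b, pred in zip(annotator_a, annotator_b, engine_output):
        if a != b:  # experts disagree -> recording is ambiguous, drop it
            continue
        y_true.append(a)
        y_pred.append(pred)
    # Score the engine against the human consensus on unambiguous recordings.
    return f1_score(y_true, y_pred)


# Example: four recordings, one of which is ambiguous and therefore excluded.
print(evaluate_engine([1, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 1]))
```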

Hi @brick - We can bet on two things: one, machines will become more and more intelligent, and two, we will rely on machines more and more for a lot of the day-to-day things we do. We’re doing that today and this reliance is only going to increase. Those things are obvious and unstoppable. We, as a human race, want to be more productive, we want to do different tasks than what we’ve been doing in the past, and we build tools for that, to automate tasks. With these two trends, machines becoming more and more intelligent and us relying on them more and more, it almost becomes imperative that a super-intelligent machine we depend on or interact with has emotional intelligence. Entities with emotional intelligence are ethically and morally sound and they make more responsible decisions.

Hi @David - Thanks for the question. There are so many things to talk about, especially when it comes to how “machines can imitate humans”.

Emotional intelligence is a complicated science. We, as humans, project a lot of affect signals about how we’re feeling, whether it’s passion, anger, or sadness, and there are also behavioral signals translated from those emotional cues, such as “am I engaged or am I disengaged?”. We project affect and emotional signals through a variety of cues: our facial expressions, our body language, by saying something or not saying something, and by the tone of our voice. Our particular focus as a company has been on deducing emotions exclusively from the voice, or the speech aspects, and the way we do it is by focusing on tonality: not just what you’re saying but how you’re saying it, the emphasis behind the words, the specific pitch, and how you’re stressing certain things.
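
To give a feel for the kind of “how you’re saying it” cues involved, here is a rough sketch using the open-source librosa library; it illustrates prosodic signals (pitch, energy, pauses) in general and is not the Behavioral Signals engine or its feature set:

```python
import librosa
import numpy as np


def prosody_summary(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)

    # Pitch (fundamental frequency) contour: carries intonation and emphasis.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Energy contour: loudness variation over time.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_variability_hz": float(np.nanstd(f0)),  # flat vs. expressive speech
        "energy_mean": float(rms.mean()),
        "energy_variability": float(rms.std()),
        "voiced_ratio": float(np.mean(voiced)),        # rough proxy for pauses
    }
```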

What we’re seeing now is that we’re interacting with machines more and more, and it’s not just delegating a task for a machine to do but actually interacting with the machine and talking to it. When I’m interacting with a fellow human and the person is saying something to me and I’m saying something back, I’m not just cueing on what the person is saying, I’m also cueing on how the person is saying it and trying to empathize with the cognitive state of mind of the person I’m speaking with, their feelings or the emotions behind the words they’re using. Today, that interaction is missing between a human and a machine, and as a result, a lot of these interactions don’t really have a superior user experience; they’re just very transactional. Try it with your Alexa :). Our goal is to give these machines the ability to be as good as humans at processing affect and the emotional state of mind, so that they can be more relatable and create a much more engaging experience for the human they’re interacting with.

Hey @irwin23 - Those which are solving for a well-defined problem AND have the abilities (skills, access, resources) to execute on that vision.

Hey @community_nick - We have tons of resources on our webpage. I’d start here: https://behavioralsignals.com/what-is-emotion-ai/
