General Questions

The Anger/Dislike detector is for developers, businesses and researchers interested in understanding whether a voice file contains anger, dislike or similar negative emotions directed outwards. See our Use Cases to accelerate your imagination.
We are offering a 30-day free trial period for anyone who is interested in voice-driven Emotions Analytics. Sign up here to get your free trial.
Like body language, our vocal intonations are language- and culture-agnostic, and to date we have analyzed more than 40 languages in 170 countries. Try your language now!

Technical Questions

This sample rate is well suited to voice analysis. It excludes the very low and very high frequencies that are not relevant to human emotions, and it keeps the file size small, so you can transfer recordings with minimal network usage. For free API keys we support only this sample rate, to ensure the best performance. If you wish to use files with a higher sample rate, the response will be slower, since the files are larger and take longer to transmit over the network.
Wave PCM is an uncompressed audio recording format. The emotion recognition software examines very fine elements of the audio signal. These elements are lost when recordings are made with low-quality devices or in noisy environments, and when signals are decoded from high-compression encoders such as those used for Voice over IP. Bottom line: signal quality affects recognition performance, so we recommend using high-quality signals.
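For illustration, here is a minimal sketch of preparing a recording for upload using the pydub library (not part of our SDK, and it requires an ffmpeg installation); the target sample rate shown is a placeholder, so substitute the rate documented for your API key:

    # Sketch: convert a recording to mono, uncompressed PCM WAV at the required sample rate.
    # pydub and ffmpeg are assumed to be installed; TARGET_RATE is a placeholder value.
    from pydub import AudioSegment

    TARGET_RATE = 8000  # hypothetical; use the sample rate documented for your API key

    audio = AudioSegment.from_file("recording.m4a")            # any format ffmpeg can decode
    audio = audio.set_channels(1).set_frame_rate(TARGET_RATE)  # mono, required sample rate
    audio.export("recording.wav", format="wav")                # uncompressed Wave PCM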
Beyond Verbal's recognition engine analyzes the voice signal using a sliding-window mechanism with a 10-second window size and a 5-second overlap. Our research team concluded that emotion is a continuously changing process, and measuring emotions on consecutive, non-overlapping segments leaves the joints between segments unanalyzed. To provide a more precise analysis that reflects these continuous changes, we analyze 10-second segments with a 5-second overlap (shift), so that the odd segments measure emotions at the joints between the even segments.
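To make the windowing concrete, the short sketch below (an illustration of the scheme, not our engine's code) lists the segments analyzed for a 25-second clip:

    # Illustration of 10-second analysis windows with a 5-second shift (5-second overlap).
    WINDOW = 10  # seconds
    SHIFT = 5    # seconds

    def segments(duration):
        """Yield (start, end) times, in seconds, of the windows covering a clip."""
        start = 0
        while start + WINDOW <= duration:
            yield (start, start + WINDOW)
            start += SHIFT

    print(list(segments(25)))  # [(0, 10), (5, 15), (10, 20), (15, 25)]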
Our Emotions Analytics engine requires a minimum of 10 seconds of good-quality audio to produce a single analysis result; further results follow at 5-second intervals because of the overlap. We highly recommend you read our Voice Input Guidelines before starting an analysis.
Yes, we support both real-time analysis and post-recording (bulk) analysis. For real-time analysis, stream the audio using HTTP chunked transfer encoding.
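As a rough illustration, the Python sketch below streams an audio file with chunked transfer encoding using the requests library; the URL and token are placeholders, not our documented endpoint, so take the real values from our API documentation:

    # Sketch: real-time upload with HTTP chunked transfer encoding.
    # ANALYSIS_URL and ACCESS_TOKEN are placeholders, not the documented endpoint.
    import requests

    ANALYSIS_URL = "https://api.example.com/v1/recording/<recordingId>"  # placeholder
    ACCESS_TOKEN = "<your access token>"                                 # placeholder

    def audio_chunks(path, chunk_size=4096):
        """Yield the file in small pieces; a generator body makes requests send it chunked."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk

    response = requests.post(
        ANALYSIS_URL,
        data=audio_chunks("recording.wav"),  # generator => Transfer-Encoding: chunked
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    print(response.json())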
Currently our engine analyzes only a single speaker per session. If you have two speakers on separate channels, you can analyze each channel in a separate session.
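If your recording is a single stereo file, you could split it into one mono file per speaker before uploading; the sketch below assumes 16-bit stereo PCM WAV input and uses only the standard wave module plus NumPy:

    # Sketch: split a two-channel recording into one mono WAV file per speaker.
    import wave
    import numpy as np

    with wave.open("two_speakers.wav", "rb") as stereo:
        params = stereo.getparams()
        frames = np.frombuffer(stereo.readframes(params.nframes), dtype=np.int16)

    channels = frames.reshape(-1, params.nchannels)  # one column per speaker

    for i in range(params.nchannels):
        with wave.open(f"speaker_{i + 1}.wav", "wb") as mono:
            mono.setnchannels(1)
            mono.setsampwidth(params.sampwidth)
            mono.setframerate(params.framerate)
            mono.writeframes(channels[:, i].tobytes())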
We provide simple RESTful API documentation and sample code for several platforms, including Android, iOS, .NET and JavaScript.
You are welcome to test our web-based JavaScript demo application. Just upload your own file or try one of our preloaded ones.
There could be a variety of reasons for not getting an analysis. The most common one is too little voice: our engine requires a minimum of 10 seconds of audio to produce the first batch of analysis, and this 10-second duration excludes prolonged silence and background noise. Check out our Voice Input Guidelines for more useful tips on recording good-quality audio.
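Before uploading, you can run a rough local pre-check for this; the sketch below (our own illustration, not part of the API) estimates how much of a 16-bit mono PCM WAV file is louder than an arbitrary silence threshold:

    # Sketch: estimate the amount of non-silent audio in a 16-bit mono PCM WAV file.
    # SILENCE_RMS is a hypothetical threshold; tune it for your recording setup.
    import wave
    import numpy as np

    MIN_VOICED_SECONDS = 10
    FRAME_SECONDS = 0.5      # analyze the file in half-second frames
    SILENCE_RMS = 500        # hypothetical amplitude threshold

    with wave.open("recording.wav", "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    frame_len = int(rate * FRAME_SECONDS)
    voiced = 0.0
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len].astype(np.float64)
        if np.sqrt(np.mean(frame ** 2)) > SILENCE_RMS:
            voiced += FRAME_SECONDS

    print("enough voiced audio" if voiced >= MIN_VOICED_SECONDS
          else "less than 10 seconds of voiced audio detected")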

Did not find what you are looking for? Send your question to api@beyondverbal.com.