Training a model to listen to OM chants
My parents, especially my father, often remind me to meditate, do pranayama, and exercise in general to stay mentally and physically fit. At home we used to sit together and do just that: chant the OM mantra, trying to synchronize our sounds and experience the relaxation as well as the natural aura. It sounds fancy when I put it like this, but ask someone who chants regularly (yes, I mean regularly) and they might say the same. Most people who try chanting do it without the right expectations and mindset, and end up dismissing it for life. They don't give it enough time before making a decision. Oh, I started giving a life lecture again! Coming back to the point…
If people can't get in touch with an expert, or really anyone who can teach chanting, it's difficult for them to know whether they are doing it the right way. Learning requires good feedback. This felt like the perfect job for an AI: let it listen to thousands of chants, and we have a model. But we need to analyze some things first.
The first thing I did was quickly open Audacity, record a few chants, and note down the peak frequencies of each chant (a small sketch of reading such peaks programmatically follows the list below). We can split a chant into 3 parts: A (as in America), U (as in student) and M (humming) sounds.
My peak frequencies (age 20, male):
- A : 126Hz
- U : 141Hz and 241Hz
- M : 132Hz, with the 0-72Hz range also filled
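If you would rather read these peaks off in code than from Audacity's spectrum view, here is a minimal sketch of the idea using numpy and scipy. The filename and the 1 kHz cutoff are assumptions for illustration, not part of my actual workflow:

```python
import numpy as np
from scipy.io import wavfile

def peak_frequency(path, fmin=20.0, fmax=1000.0):
    """Dominant frequency (Hz) of a WAV clip, searched within [fmin, fmax]."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                       # collapse stereo to mono
        samples = samples.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(samples))    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    band = (freqs > fmin) & (freqs < fmax)     # ignore DC and high frequencies
    return freqs[band][np.argmax(spectrum[band])]

# "chant_a.wav" is a placeholder: a clip containing only the "A" part of the chant.
print(f"Peak: {peak_frequency('chant_a.wav'):.1f} Hz")
```

Running it separately on clips of the A, U and M portions gives per-part peaks like the ones listed above.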
I also tried sample chants from the internet and measured their frequencies. The peaks varied from recording to recording, so I concluded that no direct frequency measurement will work on its own.
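One quick way to see this, reusing the hypothetical peak_frequency helper from the sketch above, is to measure my clip and a few downloaded samples side by side and watch how much the peaks move around (the filenames are placeholders):

```python
# Placeholder filenames: one of my clips plus two samples from the internet.
for path in ["my_chant_a.wav", "sample1_a.wav", "sample2_a.wav"]:
    print(path, f"{peak_frequency(path):.1f} Hz")  # peaks shift with the voice
```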
Need to dig deeper to find more correlations.