App I want to make
I would like to build an audio-recognition mobile app, similar to Shazam, using:
- Expo
- Expo AV (https://docs.expo.io/versions/latest/sdk/audio)
- Tensorflow serving
- Socket.IO
I want to send the recording data to a machine-learning recognition server via Socket.IO every second, or possibly every sample (although sending data at the sample rate is probably too much). The mobile app then receives and displays the predicted result.
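The "every second rather than every sample" idea can be sketched as a simple throttle. This is plain JavaScript with hypothetical names; `emit` stands in for a Socket.IO call such as `socket.emit('audio-chunk', chunk)`:

```javascript
// Throttle: forward an audio chunk at most once per intervalMillis.
// `emit` is whatever actually sends the data (e.g. a Socket.IO emit).
function makeThrottledSender(emit, intervalMillis = 1000) {
  let lastSentAt = -Infinity;
  return function send(chunk, nowMillis) {
    if (nowMillis - lastSentAt >= intervalMillis) {
      lastSentAt = nowMillis;
      emit(chunk);
      return true; // chunk was forwarded
    }
    return false; // dropped: too soon since the last send
  };
}
```

Calling `send` from a per-sample or per-status callback would then forward at most one chunk per second, regardless of how often the callback fires.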
Problem
How can I get data from the recordingInstance while it is still recording? I read the Expo Audio documentation, but I couldn't figure out how to do it.
So far
I ran two examples:
Now I want to combine the two examples. Thank you for reading. Even just being able to console.log the recording data would help a lot.
Related questions
https://forums.expo.io/t/measure-loudness-of-the-audio-in-realtime/18259
This one suggests it might be impossible (to drive an animation, or to get the data in real time): https://forums.expo.io/t/how-to-get-the-volume-while-recording-an-audio/44100
No answer: https://forums.expo.io/t/stream-microphone-recording/4314
According to that question, https://www.npmjs.com/package/react-native-recording seems to be a solution, but it requires ejecting from Expo.
I think I found a good solution to this problem.
After creating and preparing the recording, I added a callback function that runs every 10 seconds.
The callback checks whether the duration is greater than 10 seconds and whether the difference from the last duration is greater than 0; if so, it sends the data over the WebSocket.
The only remaining problem is that the callback doesn't fire the first time, only from the second time onward.
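The callback logic described above can be sketched like this (plain JavaScript with hypothetical names; in Expo the callback would be attached with `recording.setOnRecordingStatusUpdate(onStatus)` and the interval set with `recording.setProgressUpdateInterval(10000)`):

```javascript
// Duration of the last chunk we already sent, in milliseconds.
let lastDurationMillis = 0;

// Status-update callback: send only once the recording is longer than
// 10 s and the duration has advanced since the last send (i.e. new
// audio exists). `sendOverWebSocket` stands in for the Socket.IO emit.
function onStatus(status, sendOverWebSocket) {
  if (
    status.durationMillis > 10000 &&
    status.durationMillis - lastDurationMillis > 0
  ) {
    lastDurationMillis = status.durationMillis;
    // In the app this would read the recording file (e.g. via the
    // recording's getURI()) and emit the bytes; here we just pass the
    // duration along to keep the sketch self-contained.
    sendOverWebSocket(status.durationMillis);
  }
}
```

The `status.durationMillis` field is what Expo's recording status updates report; everything else here is an illustrative stand-in for the real file-reading and socket code.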