They are now all up and running. The speech recognition rate is at 95%. Speaker recognition no longer makes mistakes: it recognizes its owner and automatically creates new people when it hears an unknown voice. And the deep chatbot passes most of a Turing test, but… it kind of answers like a movie character (it is using one of the movie dialogue databases for training).
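To make the "create new people automatically" part concrete, here is a rough sketch of how auto-enrollment from voice embeddings can work. This is my own illustration, not the actual code: the type names, the `person-N` naming, and the 0.75 similarity threshold are all placeholders.

```swift
import Foundation

// Sketch: a known speaker is just a name plus a reference voice embedding.
struct SpeakerProfile {
    let name: String
    var embedding: [Float]
}

// Cosine similarity between two embeddings of equal length.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (magA * magB)
}

// Return the best-matching known speaker, or enroll a new one if
// nobody clears the (assumed) threshold.
func identify(_ embedding: [Float],
              in profiles: inout [SpeakerProfile],
              threshold: Float = 0.75) -> SpeakerProfile {
    if let best = profiles.max(by: {
            cosineSimilarity($0.embedding, embedding)
          < cosineSimilarity($1.embedding, embedding) }),
       cosineSimilarity(best.embedding, embedding) >= threshold {
        return best
    }
    // No match: automatically create a new, not-yet-named person.
    let newcomer = SpeakerProfile(name: "person-\(profiles.count + 1)",
                                  embedding: embedding)
    profiles.append(newcomer)
    return newcomer
}
```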
So, this is when we see whether I was totally insane to try to put this together, or not.
I expect the merge to take 3 to 5 days.
Memory-wise, it all fits in around 1 GB of RAM altogether: the chatbot runs on TensorFlow Lite, DeepSpeech runs on HackedCoreML, and speaker recognition uses the iPhone DSP and CoreML.
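For reference, bringing up a model on TensorFlow Lite's Swift bindings looks roughly like this; `chatbot.tflite` is a placeholder file name, not the real model.

```swift
import Foundation
import TensorFlowLite

// Minimal sketch of starting the chatbot's TF Lite interpreter.
func loadChatbot() throws -> Interpreter {
    guard let path = Bundle.main.path(forResource: "chatbot", ofType: "tflite") else {
        throw NSError(domain: "chatbot", code: 1)  // model not bundled
    }
    var options = Interpreter.Options()
    options.threadCount = 2                        // keep the CPU footprint small
    let interpreter = try Interpreter(modelPath: path, options: options)
    try interpreter.allocateTensors()              // claims the tensor RAM up front
    return interpreter
}
```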
One little problem remains: CoreML is pretty slow at loading its models, and large models do not cooperate well together. They seem to keep kicking each other out of the Neural Engine. I may be the dummy here, so I am triple-checking what is happening.
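One possible workaround (an assumption on my part, not necessarily the fix) is to load each model once at startup and pin the less latency-critical ones to CPU+GPU via `MLModelConfiguration.computeUnits`, so they stop competing for the Neural Engine. `SpeakerNet` below stands in for a compiled model name.

```swift
import Foundation
import CoreML

// Sketch: pin the speaker model to CPU+GPU, leaving the
// Neural Engine free for the bigger DeepSpeech model.
func loadSpeakerModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // skip the Neural Engine entirely

    // In a real project you would use the Xcode-generated class,
    // e.g. SpeakerNet(configuration: config).
    guard let url = Bundle.main.url(forResource: "SpeakerNet",
                                    withExtension: "mlmodelc") else {
        throw NSError(domain: "speaker", code: 1)  // compiled model missing
    }
    return try MLModel(contentsOf: url, configuration: config)
}
```

Loading once and holding the `MLModel` instances for the app's lifetime also sidesteps the slow model-load path on every request.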
So, let’s see, was I insane to try to put all of this together?