Can I use OpenEars (or other) to listen to & analyze speech from the Apple Watch microphone?
Is it possible to use OpenEars (or another package) to access speech from the Apple Watch's microphone?
I would like to build an app that can listen to speech through the watch's microphone and spot specific keywords.
As of watchOS 2.1 and iOS 9, I have been able to do what I proposed, in two different ways:
Option 1 - record a WAV file and upload it to an ASR server: I recorded and saved a WAV file on the Apple Watch, then uploaded it to a paid speech-recognition provider, and it worked fine! Here is the recording code; replace the UI-updating lines (and the debug ones) with your own:
//Record an audio sample
var saveUrl: NSURL? //this var is initialized in the awakeWithContext method

func recordAudio() {
    let duration = NSTimeInterval(5)
    let recordOptions = [WKAudioRecorderControllerOptionsMaximumDurationKey : duration]
    // print("recording to: " + (saveUrl?.description)!)

    //Construct the audio file URL inside the shared app-group container
    let fileManager = NSFileManager.defaultManager()
    let container = fileManager.containerURLForSecurityApplicationGroupIdentifier("group.artivoice.applewatch")
    let fileName = "audio.wav"
    saveUrl = container?.URLByAppendingPathComponent(fileName)

    presentAudioRecorderControllerWithOutputURL(saveUrl!,
        preset: .WideBandSpeech,
        options: recordOptions,
        completion: { saved, error in
            if let err = error {
                print(err.description)
                self.sendMessageToPhone("recording error: " + err.description)
            }
            if saved {
                self.btnPlay.setEnabled(true)
                self.sendMessageToPhone("Audio saved successfully.")
                print("Audio saved")
                self.uploadAudioSample()
            }
        })
}
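The uploadAudioSample() call above is left for you to fill in; below is a minimal sketch of one way it could look, assuming your provider accepts a raw WAV body over HTTPS. The endpoint URL, content type, and response handling are hypothetical placeholders, so check your ASR provider's docs for the real contract. It is meant to live in the same WKInterfaceController as recordAudio():

func uploadAudioSample() {
    guard let fileUrl = saveUrl else { return }
    //hypothetical endpoint - replace with your ASR provider's upload URL
    guard let endpoint = NSURL(string: "https://asr.example.com/recognize") else { return }

    let request = NSMutableURLRequest(URL: endpoint)
    request.HTTPMethod = "POST"
    request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")

    //upload the recorded WAV file as the request body
    let task = NSURLSession.sharedSession().uploadTaskWithRequest(request, fromFile: fileUrl) { data, response, error in
        if let err = error {
            self.sendMessageToPhone("upload error: " + err.description)
            return
        }
        //parse the transcription out of the response (the format varies per provider)
        if let data = data, body = NSString(data: data, encoding: NSUTF8StringEncoding) {
            self.sendMessageToPhone("ASR result: " + (body as String))
        }
    }
    task.resume()
}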
Option 2 - use the Apple Watch's native speech recognition: in this approach I take the original, native voice input menu, but...! I don't add any button options, just pure ASR. I launch an empty voice menu, and then recover the string returned by the ASR. Here's the code, enjoy:
func launchIWatchVoiceRecognition() {
    //you can see the empty suggestions array []; add options if it suits you
    self.presentTextInputControllerWithSuggestions([],
        allowedInputMode: WKTextInputMode.Plain,
        completion: { (results) -> Void in
            //use .first rather than [0], which would crash on an empty array
            let aResult = results?.first as? String
            if !(aResult == nil) {
                print(aResult) //print the result
                self.sendMessageToPhone("Native ASR says: " + aResult!)
                dispatch_async(dispatch_get_main_queue()) {
                    self.txtWatch.setText(aResult) //show the result on the UI
                }
            } //end if
        }) //end show voice menu
}
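And since the whole point was spotting specific keywords, here is a tiny sketch of how you could scan the returned string. The keyword list is a made-up example; adapt it to your app:

import Foundation

//hypothetical keyword list - replace with the words your app cares about
let keywords = ["lights", "music", "alarm"]

func spotKeywords(transcript: String) -> [String] {
    //lowercase the transcript so matching is case-insensitive
    let lowered = transcript.lowercaseString
    return keywords.filter { lowered.containsString($0) }
}

You could call spotKeywords(aResult!) inside the completion handler above, right after the nil check.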
Option 2 is lightning fast, but Option 1 can be more handy if you want advanced speech-recognition functions (custom vocabularies, grammars...), so I would recommend Option 1 to most users. Voila!! If you need any hints, let me know!