diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8e284a5..92ad51c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -30,4 +30,8 @@
 * Fix onCancel nil error
 
 ## [1.0.10] - 2023-06-01
-* Improved hot restart behaviour on Android
\ No newline at end of file
+* Improved hot restart behaviour on Android
+
+## [1.1.0] - 2023-08-03
+* Improved API
+* Changed native recorders
\ No newline at end of file
diff --git a/README.md b/README.md
index 58adf91..c19e762 100644
--- a/README.md
+++ b/README.md
@@ -1,37 +1,153 @@
-# flutter-voice-processor
+# Flutter Voice Processor
+
+[![GitHub release](https://img.shields.io/github/release/Picovoice/flutter-voice-processor.svg)](https://github.com/Picovoice/flutter-voice-processor/releases)
+[![GitHub](https://img.shields.io/github/license/Picovoice/flutter-voice-processor)](https://github.com/Picovoice/flutter-voice-processor/)
+
+[![Pub Version](https://img.shields.io/pub/v/flutter_voice_processor)](https://pub.dev/packages/flutter_voice_processor)
 
 Made in Vancouver, Canada by [Picovoice](https://picovoice.ai)
 
-A Flutter plugin for real-time voice processing.
+
+[![Twitter URL](https://img.shields.io/twitter/url?label=%40AiPicovoice&style=social&url=https%3A%2F%2Ftwitter.com%2FAiPicovoice)](https://twitter.com/AiPicovoice)
+
+[![YouTube Channel Views](https://img.shields.io/youtube/channel/views/UCAdi9sTCXLosG1XeqDwLx7w?label=YouTube&style=social)](https://www.youtube.com/channel/UCAdi9sTCXLosG1XeqDwLx7w)
+
+The Flutter Voice Processor is an asynchronous audio capture library designed for real-time audio
+processing on mobile devices. Given some specifications, the library delivers frames of raw audio
+data to the user via listeners.
+
+## Table of Contents
+
+- [Flutter Voice Processor](#flutter-voice-processor)
+  - [Table of Contents](#table-of-contents)
+  - [Requirements](#requirements)
+  - [Compatibility](#compatibility)
+  - [Installation](#installation)
+  - [Permissions](#permissions)
+  - [Usage](#usage)
+    - [Capturing with Multiple Listeners](#capturing-with-multiple-listeners)
+  - [Example](#example)
+
+## Requirements
+
+- [Flutter SDK](https://docs.flutter.dev/get-started/install)
+- [Android SDK](https://developer.android.com/about/versions/12/setup-sdk) (21+)
+- [JDK](https://www.oracle.com/java/technologies/downloads/) (8+)
+- [Xcode](https://developer.apple.com/xcode/) (11+)
+- [CocoaPods](https://cocoapods.org/)
+
+## Compatibility
+
+- Flutter 1.20.0+
+- Android 5.0+ (API 21+)
+- iOS 11.0+
+
+## Installation
+
+Flutter Voice Processor is available via [pub.dev](https://pub.dev/packages/flutter_voice_processor).
+To import it into your Flutter project, add the following line to your `pubspec.yaml`:
+```yaml
+dependencies:
+  flutter_voice_processor: ^<version>
+```
+
+## Permissions
+
+To enable recording with the hardware's microphone, you must first ensure that you have enabled the proper permission on both iOS and Android.
+
+On iOS, open the `Info.plist` file and add the following lines:
+```xml
+<key>NSMicrophoneUsageDescription</key>
+<string>[Permission explanation]</string>
+```
+
+On Android, open the `AndroidManifest.xml` and add the following line:
+```xml
+<uses-permission android:name="android.permission.RECORD_AUDIO" />
+```
+
+See our [example app](./example) for how to properly request this permission from your users.
+
 ## Usage
 
-Create:
+Access the singleton instance of `VoiceProcessor`:
+
 ```dart
-int frameLength = 512;
-int sampleRate = 16000;
-VoiceProcessor _voiceProcessor = VoiceProcessor.getVoiceProcessor(frameLength, sampleRate);
-Function _removeListener = _voiceProcessor.addListener((buffer) {
-  print("Listener received buffer of size ${buffer.length}!");
-});
+import 'package:flutter_voice_processor/flutter_voice_processor.dart';
+VoiceProcessor? _voiceProcessor = VoiceProcessor.instance;
 ```
 
-Start audio:
+Add listeners for audio frames and errors:
+
 ```dart
-try {
-  if (await _voiceProcessor.hasRecordAudioPermission()) {
-    await _voiceProcessor.start();
-  } else {
-    print("Recording permission not granted");
+VoiceProcessorFrameListener frameListener = (List<int> frame) {
+  // use audio
+};
+
+VoiceProcessorErrorListener errorListener = (VoiceProcessorException error) {
+  // handle error
+};
+
+_voiceProcessor?.addFrameListener(frameListener);
+_voiceProcessor?.addErrorListener(errorListener);
+```
+
+Ask for audio record permission and start recording with the desired frame length and audio sample rate:
+
+```dart
+final int frameLength = 512;
+final int sampleRate = 16000;
+if (await _voiceProcessor?.hasRecordAudioPermission() ?? false) {
+  try {
+    await _voiceProcessor?.start(frameLength, sampleRate);
+  } on PlatformException catch (ex) {
+    // handle start error
   }
+} else {
+  // user did not grant permission
+}
+```
+
+Stop audio capture:
+```dart
+try {
+  await _voiceProcessor?.stop();
 } on PlatformException catch (ex) {
-  print("Failed to start recorder: " + ex.toString());
+  // handle stop error
 }
 ```
 
-Stop audio:
+Once audio capture has started successfully, any frame listeners assigned to the `VoiceProcessor` will start receiving audio frames with the given `frameLength` and `sampleRate`.
+
+### Capturing with Multiple Listeners
+
+Any number of listeners can be added to and removed from the `VoiceProcessor` instance. However,
+the instance can only record audio with a single audio configuration (`frameLength` and `sampleRate`),
+which all listeners will receive once a call to `start()` has been made. To add multiple listeners:
 ```dart
-await _voiceProcessor.stop();
-_removeListener();
-```
\ No newline at end of file
+VoiceProcessorFrameListener listener1 = (frame) { };
+VoiceProcessorFrameListener listener2 = (frame) { };
+List<VoiceProcessorFrameListener> listeners = [listener1, listener2];
+_voiceProcessor?.addFrameListeners(listeners);
+
+_voiceProcessor?.removeFrameListeners(listeners);
+// or
+_voiceProcessor?.clearFrameListeners();
+```
+
+## Example
+
+The [Flutter Voice Processor app](./example) demonstrates how to ask for user permissions and capture output from the `VoiceProcessor`.
+
+## Releases
+
+### v1.1.0 - August 4, 2023
+- Numerous API improvements
+- Error handling improvements
+- Allow for multiple listeners instead of a single callback function
+- Upgrades to testing infrastructure and example app
+
+### v1.0.0 - December 8, 2020
+
+- Initial public release.
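In addition to `start()` and `stop()`, the 1.1.0 API exposes an `isRecording()` check (routed through the new `isRecording` method-channel case on both platforms), which the example app uses to confirm capture state. A minimal usage sketch, assuming the `VoiceProcessor.instance` API shown above; variable names are illustrative only:

```dart
// Sketch: confirm capture state around start()/stop() (v1.1.0 API).
VoiceProcessor? vp = VoiceProcessor.instance;
await vp?.start(512, 16000);
bool? recording = await vp?.isRecording(); // expected to be true while capturing
```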
diff --git a/android/build.gradle b/android/build.gradle index bc27df8..2d2279c 100644 --- a/android/build.gradle +++ b/android/build.gradle @@ -4,7 +4,7 @@ version '1.0' buildscript { repositories { google() - jcenter() + mavenCentral() } dependencies { @@ -15,7 +15,7 @@ buildscript { rootProject.allprojects { repositories { google() - jcenter() + mavenCentral() } } @@ -27,7 +27,20 @@ android { defaultConfig { minSdkVersion 21 } + buildTypes { + release { + minifyEnabled false + } + } lintOptions { - disable 'InvalidPackage' + disable 'GradleCompatible' + } + compileOptions { + sourceCompatibility JavaVersion.VERSION_1_8 + targetCompatibility JavaVersion.VERSION_1_8 } } + +dependencies { + implementation 'ai.picovoice:android-voice-processor:1.0.2' +} diff --git a/android/gradle/wrapper/gradle-wrapper.properties b/android/gradle/wrapper/gradle-wrapper.properties index 01a286e..d687be6 100644 --- a/android/gradle/wrapper/gradle-wrapper.properties +++ b/android/gradle/wrapper/gradle-wrapper.properties @@ -1,5 +1,6 @@ +#Mon Jul 31 11:07:10 PDT 2023 distributionBase=GRADLE_USER_HOME +distributionUrl=https\://services.gradle.org/distributions/gradle-6.5-bin.zip distributionPath=wrapper/dists -zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists -distributionUrl=https\://services.gradle.org/distributions/gradle-5.6.2-all.zip +zipStoreBase=GRADLE_USER_HOME diff --git a/android/gradlew b/android/gradlew old mode 100644 new mode 100755 diff --git a/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorHandler.java b/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorHandler.java index 938e7ae..ac6a39c 100644 --- a/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorHandler.java +++ b/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorHandler.java @@ -14,22 +14,18 @@ import android.Manifest; import android.app.Activity; import android.content.pm.PackageManager; -import android.media.AudioFormat; -import android.media.AudioRecord; -import android.media.MediaRecorder; import android.os.Handler; import android.os.Looper; -import android.os.Process; -import android.util.Log; import androidx.annotation.NonNull; import androidx.core.app.ActivityCompat; import java.util.ArrayList; -import java.util.concurrent.Callable; -import java.util.concurrent.Executors; -import java.util.concurrent.atomic.AtomicBoolean; +import ai.picovoice.android.voiceprocessor.VoiceProcessor; +import ai.picovoice.android.voiceprocessor.VoiceProcessorErrorListener; +import ai.picovoice.android.voiceprocessor.VoiceProcessorException; +import ai.picovoice.android.voiceprocessor.VoiceProcessorFrameListener; import io.flutter.plugin.common.EventChannel.EventSink; import io.flutter.plugin.common.EventChannel.StreamHandler; import io.flutter.plugin.common.MethodCall; @@ -38,271 +34,238 @@ import io.flutter.plugin.common.PluginRegistry; public class FlutterVoiceProcessorHandler - implements - MethodCallHandler, - StreamHandler, - PluginRegistry.RequestPermissionsResultListener { + implements + MethodCallHandler, + StreamHandler, + PluginRegistry.RequestPermissionsResultListener { - private static final String LOG_TAG = "FlutterVoiceProcessorPlugin"; - private static final int RECORD_AUDIO_REQUEST_CODE = - FlutterVoiceProcessorHandler.class.hashCode(); + private static final String LOG_TAG = "FlutterVoiceProcessorPlugin"; + private static final int RECORD_AUDIO_REQUEST_CODE = + FlutterVoiceProcessorHandler.class.hashCode(); - 
private final AtomicBoolean isCapturingAudio = new AtomicBoolean(false); - private final AtomicBoolean stopRequested = new AtomicBoolean(false); + private final Activity activity; + private final VoiceProcessor voiceProcessor; + private final Handler eventHandler = new Handler(Looper.getMainLooper()); + private Result pendingPermissionResult; + private Result pendingStartRecordResult; + private Result pendingStopRecordResult; - private final Activity activity; - private final Handler eventHandler = new Handler(Looper.getMainLooper()); - private Result pendingPermissionResult; - private Result pendingStartRecordResult; - private Result pendingStopRecordResult; - private EventSink bufferEventSink; - private EventSink errorEventSink; + private EventSink frameEventSink; + private EventSink errorEventSink; - FlutterVoiceProcessorHandler(Activity activity) { - this.activity = activity; - } - - void close() { - stop(); - pendingPermissionResult = null; - pendingStartRecordResult = null; - pendingStopRecordResult = null; - } - - @Override - public void onMethodCall(@NonNull MethodCall call, @NonNull Result result) { - switch (call.method) { - case "start": - if ( - !(call.argument("frameLength") instanceof Integer) || - !(call.argument("sampleRate") instanceof Integer) - ) { - result.error( - "PV_INVALID_ARGUMENT", - "Invalid argument provided to VoiceProcessor.start", - null - ); - break; - } + FlutterVoiceProcessorHandler(Activity activity) { + this.activity = activity; + this.voiceProcessor = VoiceProcessor.getInstance(); + this.voiceProcessor.addErrorListener(new VoiceProcessorErrorListener() { + @Override + public void onError(VoiceProcessorException error) { + eventHandler.post( + new Runnable() { + @Override + public void run() { + if (pendingStartRecordResult != null) { + pendingStartRecordResult.error( + "PV_AUDIO_RECORDER_ERROR", + "Unable to start audio recording: " + error, + null + ); + pendingStartRecordResult = null; + } else if (errorEventSink != null) { + errorEventSink.success("PV_AUDIO_RECORDER_ERROR: " + error); + } + } + } + ); + } + }); + this.voiceProcessor.addFrameListener(new VoiceProcessorFrameListener() { + @Override + public void onFrame(final short[] frame) { + eventHandler.post( + new Runnable() { + @Override + public void run() { + if (frameEventSink != null) { + final ArrayList frameArrayList = new ArrayList<>(); + for (short sample : frame) { + frameArrayList.add(sample); + } + frameEventSink.success(frameArrayList); + } + } + } + ); + } + }); + } - final Integer frameLength = call.argument("frameLength"); - final Integer sampleRate = call.argument("sampleRate"); - pendingStartRecordResult = result; - start(frameLength, sampleRate); - break; - case "stop": - pendingStopRecordResult = result; + void close() { stop(); - break; - case "hasRecordAudioPermission": - checkRecordAudioPermission(result); - break; - default: - result.notImplemented(); + pendingPermissionResult = null; + pendingStartRecordResult = null; + pendingStopRecordResult = null; } - } - @Override - public void onListen(Object listener, EventSink eventSink) { - if (listener != null) { - String type = (String) listener; - if(type.equals("buffer")) { - bufferEventSink = eventSink; - } - else if (type.equals("error")) { - errorEventSink = eventSink; - } + @Override + public void onMethodCall(@NonNull MethodCall call, @NonNull Result result) { + switch (call.method) { + case "start": + if (!(call.argument("frameLength") instanceof Integer) || + !(call.argument("sampleRate") instanceof Integer)) { + 
result.error( + "PV_INVALID_ARGUMENT", + "Invalid argument provided to VoiceProcessor.start", + null + ); + break; + } + + final Integer frameLength = call.argument("frameLength"); + final Integer sampleRate = call.argument("sampleRate"); + pendingStartRecordResult = result; + start(frameLength, sampleRate); + break; + case "stop": + pendingStopRecordResult = result; + stop(); + break; + case "isRecording": + result.success(voiceProcessor.getIsRecording()); + break; + case "hasRecordAudioPermission": + checkRecordAudioPermission(result); + break; + default: + result.notImplemented(); + } } - } - @Override - public void onCancel(Object listener) { - if (listener != null) { - String type = (String) listener; - if(type.equals("buffer")) { - bufferEventSink = null; - } - else if (type.equals("error")) { - errorEventSink = null; - } + @Override + public void onListen(Object listener, EventSink eventSink) { + if (listener != null) { + String type = (String) listener; + if (type.equals("frame")) { + frameEventSink = eventSink; + } else if (type.equals("error")) { + errorEventSink = eventSink; + } + } } - } - public void start(final Integer frameSize, final Integer sampleRate) { - if (isCapturingAudio.get()) { - if (pendingStartRecordResult != null) { - pendingStartRecordResult.success(true); - pendingStartRecordResult = null; - } - return; + @Override + public void onCancel(Object listener) { + if (listener != null) { + String type = (String) listener; + if (type.equals("frame")) { + frameEventSink = null; + } else if (type.equals("error")) { + errorEventSink = null; + } + } } - Executors - .newSingleThreadExecutor() - .submit( - new Callable() { - @Override - public Void call() { - android.os.Process.setThreadPriority( - Process.THREAD_PRIORITY_URGENT_AUDIO + public void start(final Integer frameLength, final Integer sampleRate) { + try { + voiceProcessor.start(frameLength, sampleRate); + eventHandler.post( + new Runnable() { + @Override + public void run() { + if (pendingStartRecordResult != null) { + pendingStartRecordResult.success(true); + pendingStartRecordResult = null; + } + } + } + ); + } catch (VoiceProcessorException e) { + eventHandler.post( + new Runnable() { + @Override + public void run() { + if (pendingStartRecordResult != null) { + pendingStartRecordResult.error( + "PV_AUDIO_RECORDER_ERROR", + "Unable to start audio recording: " + e, + null + ); + pendingStartRecordResult = null; + } else if (errorEventSink != null) { + errorEventSink.success("PV_AUDIO_RECORDER_ERROR: " + e); + } + } + } ); - read(frameSize, sampleRate); - return null; - } } - ); - } - - public void stop() { - if (!isCapturingAudio.get()) { - return; } - stopRequested.set(true); - } - - private void checkRecordAudioPermission(@NonNull Result result) { - boolean isPermissionGranted = - ActivityCompat.checkSelfPermission( - activity, - Manifest.permission.RECORD_AUDIO - ) == - PackageManager.PERMISSION_GRANTED; - if (!isPermissionGranted) { - pendingPermissionResult = result; - ActivityCompat.requestPermissions( - activity, - new String[] { Manifest.permission.RECORD_AUDIO }, - RECORD_AUDIO_REQUEST_CODE - ); - } else { - result.success(true); + public void stop() { + try { + voiceProcessor.stop(); + eventHandler.post( + new Runnable() { + @Override + public void run() { + if (pendingStopRecordResult != null) { + pendingStopRecordResult.success(true); + pendingStopRecordResult = null; + } + } + } + ); + } catch (VoiceProcessorException e) { + eventHandler.post( + new Runnable() { + @Override + public void run() { + 
if (pendingStopRecordResult != null) { + pendingStopRecordResult.error( + "PV_AUDIO_RECORDER_ERROR", + "Unable to stop audio recording: " + e, + null + ); + pendingStopRecordResult = null; + } else if (errorEventSink != null) { + errorEventSink.success("PV_AUDIO_RECORDER_ERROR: " + e); + } + } + } + ); + } } - } - @Override - public boolean onRequestPermissionsResult( - int requestCode, - String[] permissions, - int[] grantResults - ) { - if (requestCode == RECORD_AUDIO_REQUEST_CODE) { - if (pendingPermissionResult != null) { - if ( - grantResults.length > 0 && - grantResults[0] == PackageManager.PERMISSION_GRANTED - ) { - pendingPermissionResult.success(true); + private void checkRecordAudioPermission(@NonNull Result result) { + if (!voiceProcessor.hasRecordAudioPermission(activity)) { + pendingPermissionResult = result; + ActivityCompat.requestPermissions( + activity, + new String[]{Manifest.permission.RECORD_AUDIO}, + RECORD_AUDIO_REQUEST_CODE + ); } else { - pendingPermissionResult.success(false); + result.success(true); } - pendingPermissionResult = null; - return true; - } } - return false; - } - - private void read(final Integer frameSize, final Integer sampleRate) { - final int minBufferSize = AudioRecord.getMinBufferSize( - sampleRate, - AudioFormat.CHANNEL_IN_MONO, - AudioFormat.ENCODING_PCM_16BIT - ); - final int bufferSize = Math.max(sampleRate / 2, minBufferSize); - final short[] buffer = new short[frameSize]; - AudioRecord audioRecord = null; - try { - audioRecord = - new AudioRecord( - MediaRecorder.AudioSource.MIC, - sampleRate, - AudioFormat.CHANNEL_IN_MONO, - AudioFormat.ENCODING_PCM_16BIT, - bufferSize - ); - - audioRecord.startRecording(); - boolean firstBuffer = true; - while (!stopRequested.get()) { - if (audioRecord.read(buffer, 0, buffer.length) == buffer.length) { - if (firstBuffer) { - isCapturingAudio.set(true); - firstBuffer = false; - - // report success to user - eventHandler.post( - new Runnable() { - @Override - public void run() { - if (pendingStartRecordResult != null) { - pendingStartRecordResult.success(true); - pendingStartRecordResult = null; - } + @Override + public boolean onRequestPermissionsResult( + int requestCode, + String[] permissions, + int[] grantResults + ) { + if (requestCode == RECORD_AUDIO_REQUEST_CODE) { + if (pendingPermissionResult != null) { + if (grantResults.length > 0 && + grantResults[0] == PackageManager.PERMISSION_GRANTED) { + pendingPermissionResult.success(true); + } else { + pendingPermissionResult.success(false); } - } - ); - } - - final ArrayList bufferObj = new ArrayList(); - for (int i = 0; i < buffer.length; i++) bufferObj.add(buffer[i]); - - // send buffer event - eventHandler.post( - new Runnable() { - @Override - public void run() { - if (bufferEventSink != null) { - bufferEventSink.success(bufferObj); - } - } - } - ); - } - } - - audioRecord.stop(); - } catch (final Exception e) { - // report exception to user - eventHandler.post( - new Runnable() { - @Override - public void run() { - if (pendingStartRecordResult != null) { - pendingStartRecordResult.error( - "PV_AUDIO_RECORDER_ERROR", - "Unable to start audio engine: " + e.toString(), - null - ); - pendingStartRecordResult = null; - } - else if (errorEventSink != null) { - errorEventSink.success("PV_AUDIO_RECORDER_ERROR: " + e.toString()); + pendingPermissionResult = null; + return true; } - } } - ); - } finally { - // report success to user - eventHandler.post( - new Runnable() { - @Override - public void run() { - if (pendingStopRecordResult != null) { - 
pendingStopRecordResult.success(true); - pendingStopRecordResult = null; - } - } - } - ); - - if (audioRecord != null) { - audioRecord.release(); - } - isCapturingAudio.set(false); - stopRequested.set(false); + return false; } - } } diff --git a/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorPlugin.java b/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorPlugin.java index 12bfd16..f38882a 100644 --- a/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorPlugin.java +++ b/android/src/main/java/ai/picovoice/flutter/voiceprocessor/FlutterVoiceProcessorPlugin.java @@ -25,7 +25,7 @@ public class FlutterVoiceProcessorPlugin private static final String LOG_TAG = "FlutterVoiceProcessorPlugin"; private MethodChannel methodChannel; - private EventChannel eventChannel; + private EventChannel frameEventChannel; private EventChannel errorEventChannel; private FlutterVoiceProcessorHandler voiceProcessorHandler; private FlutterPluginBinding pluginBinding; @@ -55,12 +55,12 @@ public void onAttachedToActivity(ActivityPluginBinding binding) { ); methodChannel.setMethodCallHandler(voiceProcessorHandler); - eventChannel = + frameEventChannel = new EventChannel( pluginBinding.getBinaryMessenger(), - "flutter_voice_processor_events" + "flutter_voice_processor_frame_events" ); - eventChannel.setStreamHandler(voiceProcessorHandler); + frameEventChannel.setStreamHandler(voiceProcessorHandler); errorEventChannel = new EventChannel( @@ -77,7 +77,7 @@ public void onDetachedFromActivity() { ); activityBinding = null; methodChannel.setMethodCallHandler(null); - eventChannel.setStreamHandler(null); + frameEventChannel.setStreamHandler(null); errorEventChannel.setStreamHandler(null); voiceProcessorHandler = null; methodChannel = null; diff --git a/example/README.md b/example/README.md index 0fd1469..e3727a4 100644 --- a/example/README.md +++ b/example/README.md @@ -1,3 +1,36 @@ -# flutter_voice_processor_example +# Flutter Voice Processor Example -Demonstrates how to use the flutter_voice_processor plugin. \ No newline at end of file +This is an example app that demonstrates how to ask for user permissions and capture output from +the `VoiceProcessor`. + +## Requirements + +- [Flutter SDK](https://docs.flutter.dev/get-started/install) +- [Android SDK](https://developer.android.com/about/versions/12/setup-sdk) (21+) +- [JDK](https://www.oracle.com/java/technologies/downloads/) (8+) +- [Xcode](https://developer.apple.com/xcode/) (11+) +- [CocoaPods](https://cocoapods.org/) + +## Compatibility + +- Flutter 1.20.0+ +- Android 5.0+ (API 21+) +- iOS 11.0+ + +## Building + +Install dependencies and setup environment: + +```console +cd example +flutter pub get +``` + +Connect a mobile device or launch a simulator. Then build and run the app: +```console +flutter run +``` + +## Usage + +Toggle recording on and off with the button in the center of the screen. While recording, the VU meter on the screen will respond to the volume of incoming audio. 
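The VU meter value in the example is derived from each audio frame by taking the RMS of the samples, converting it to dBFS against 16-bit full scale, and normalizing with a fixed 50 dB offset, as done in `calculateVolumeLevel` in `example/lib/main.dart` further down in this diff. A standalone sketch of that calculation (function and parameter names here are illustrative):

```dart
import 'dart:math';

// Sketch: map a frame of 16-bit PCM samples to a 0.0-1.0 VU meter value.
double volumeLevel(List<int> frame, {double dbOffset = 50.0}) {
  double rms = 0.0;
  for (final sample in frame) {
    rms += pow(sample, 2); // sum of squares
  }
  rms = sqrt(rms / frame.length);
  final double dbfs = 20 * log(rms / 32767.0) / log(10); // dB relative to full scale
  final double normalized = (dbfs + dbOffset) / dbOffset;
  return max(0.0, min(1.0, normalized)); // clamp to [0, 1]
}
```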
diff --git a/example/ios/Flutter/AppFrameworkInfo.plist b/example/ios/Flutter/AppFrameworkInfo.plist index 6b4c0f7..f2872cf 100644 --- a/example/ios/Flutter/AppFrameworkInfo.plist +++ b/example/ios/Flutter/AppFrameworkInfo.plist @@ -21,6 +21,6 @@ CFBundleVersion 1.0 MinimumOSVersion - 8.0 + 9.0 diff --git a/example/ios/Podfile b/example/ios/Podfile index 1e8c3c9..313ea4a 100644 --- a/example/ios/Podfile +++ b/example/ios/Podfile @@ -1,5 +1,5 @@ # Uncomment this line to define a global platform for your project -# platform :ios, '9.0' +platform :ios, '11.0' # CocoaPods analytics sends network stats synchronously affecting flutter build latency. ENV['COCOAPODS_DISABLE_STATS'] = 'true' diff --git a/example/ios/Podfile.lock b/example/ios/Podfile.lock index 34d6423..e5fee38 100644 --- a/example/ios/Podfile.lock +++ b/example/ios/Podfile.lock @@ -1,12 +1,18 @@ PODS: - Flutter (1.0.0) - - flutter_voice_processor (1.0.0): + - flutter_voice_processor (1.1.0): - Flutter + - ios-voice-processor (~> 1.1.0) + - ios-voice-processor (1.1.0) DEPENDENCIES: - Flutter (from `Flutter`) - flutter_voice_processor (from `.symlinks/plugins/flutter_voice_processor/ios`) +SPEC REPOS: + trunk: + - ios-voice-processor + EXTERNAL SOURCES: Flutter: :path: Flutter @@ -14,9 +20,10 @@ EXTERNAL SOURCES: :path: ".symlinks/plugins/flutter_voice_processor/ios" SPEC CHECKSUMS: - Flutter: 0e3d915762c693b495b44d77113d4970485de6ec - flutter_voice_processor: b4295092e5365b517b75f2af026c05d897ce4450 + Flutter: 50d75fe2f02b26cc09d224853bb45737f8b3214a + flutter_voice_processor: 53afbf59ad3feb82f4a379fea9ed8dc98495210f + ios-voice-processor: 8e32d7f980a06d392d128ef1cd19cf6ddcaca3c1 -PODFILE CHECKSUM: aafe91acc616949ddb318b77800a7f51bffa2a4c +PODFILE CHECKSUM: 7368163408c647b7eb699d0d788ba6718e18fb8d -COCOAPODS: 1.10.0 +COCOAPODS: 1.11.3 diff --git a/example/ios/Runner.xcodeproj/project.pbxproj b/example/ios/Runner.xcodeproj/project.pbxproj index 0bc766c..3b36b0e 100644 --- a/example/ios/Runner.xcodeproj/project.pbxproj +++ b/example/ios/Runner.xcodeproj/project.pbxproj @@ -3,7 +3,7 @@ archiveVersion = 1; classes = { }; - objectVersion = 46; + objectVersion = 50; objects = { /* Begin PBXBuildFile section */ @@ -156,7 +156,7 @@ 97C146E61CF9000F007C117D /* Project object */ = { isa = PBXProject; attributes = { - LastUpgradeCheck = 1020; + LastUpgradeCheck = 1300; ORGANIZATIONNAME = ""; TargetAttributes = { 97C146ED1CF9000F007C117D = { diff --git a/example/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata b/example/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata index 1d526a1..919434a 100644 --- a/example/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata +++ b/example/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata @@ -2,6 +2,6 @@ + location = "self:"> diff --git a/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme b/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme index a28140c..3db53b6 100644 --- a/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme +++ b/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme @@ -1,6 +1,6 @@ UIViewControllerBasedStatusBarAppearance + CADisableMinimumFrameDurationOnPhone + diff --git a/example/lib/main.dart b/example/lib/main.dart index 6351d63..a320708 100644 --- a/example/lib/main.dart +++ b/example/lib/main.dart @@ -1,5 +1,5 @@ // -// Copyright 2020-2021 Picovoice Inc. +// Copyright 2020-2023 Picovoice Inc. 
// // You may not use this file except in compliance with the license. A copy of the license is located in the "LICENSE" // file accompanying this source. @@ -9,10 +9,12 @@ // specific language governing permissions and limitations under the License. // -import 'package:flutter/material.dart'; import 'dart:async'; +import 'dart:math'; +import 'package:flutter/material.dart'; import 'package:flutter/services.dart'; import 'package:flutter_voice_processor/flutter_voice_processor.dart'; +import 'package:flutter_voice_processor_example/vu_meter_painter.dart'; void main() { runApp(MyApp()); @@ -24,12 +26,17 @@ class MyApp extends StatefulWidget { } class _MyAppState extends State { + final int frameLength = 512; + final int sampleRate = 16000; + final int volumeHistoryCapacity = 5; + final double dbOffset = 50.0; + + List _volumeHistory = []; + double _smoothedVolumeValue = 0.0; bool _isButtonDisabled = false; bool _isProcessing = false; + String? _errorMessage; VoiceProcessor? _voiceProcessor; - Function? _removeListener; - Function? _removeListener2; - Function? _errorListener; @override void initState() { @@ -38,7 +45,7 @@ class _MyAppState extends State { } void _initVoiceProcessor() async { - _voiceProcessor = VoiceProcessor.getVoiceProcessor(512, 16000); + _voiceProcessor = VoiceProcessor.instance; } Future _startProcessing() async { @@ -46,20 +53,44 @@ class _MyAppState extends State { _isButtonDisabled = true; }); - _removeListener = _voiceProcessor?.addListener(_onBufferReceived); - _removeListener2 = _voiceProcessor?.addListener(_onBufferReceived2); - _errorListener = _voiceProcessor?.addErrorListener(_onErrorReceived); + VoiceProcessorFrameListener frameListener = (List frame) { + double volumeLevel = calculateVolumeLevel(frame); + if (_volumeHistory.length == volumeHistoryCapacity) { + _volumeHistory.removeAt(0); + } + _volumeHistory.add(volumeLevel); + + this.setState(() { + _smoothedVolumeValue = + _volumeHistory.reduce((a, b) => a + b) / _volumeHistory.length; + }); + }; + + VoiceProcessorErrorListener errorListener = + (VoiceProcessorException error) { + this.setState(() { + _errorMessage = error.message; + }); + }; + + _voiceProcessor?.addFrameListener(frameListener); + _voiceProcessor?.addErrorListener(errorListener); try { if (await _voiceProcessor?.hasRecordAudioPermission() ?? false) { - await _voiceProcessor?.start(); + await _voiceProcessor?.start(frameLength, sampleRate); + bool? 
isRecording = await _voiceProcessor?.isRecording(); this.setState(() { - _isProcessing = true; + _isProcessing = isRecording!; }); } else { - print("Recording permission not granted"); + this.setState(() { + _errorMessage = "Recording permission not granted"; + }); } } on PlatformException catch (ex) { - print("Failed to start recorder: " + ex.toString()); + this.setState(() { + _errorMessage = "Failed to start recorder: " + ex.toString(); + }); } finally { this.setState(() { _isButtonDisabled = false; @@ -67,17 +98,16 @@ class _MyAppState extends State { } } - void _onBufferReceived(dynamic eventData) { - print("Listener 1 received buffer of size ${eventData.length}!"); - } - - void _onBufferReceived2(dynamic eventData) { - print("Listener 2 received buffer of size ${eventData.length}!"); - } + double calculateVolumeLevel(List frame) { + double rms = 0.0; + for (int sample in frame) { + rms += pow(sample, 2); + } + rms = sqrt(rms / frame.length); - void _onErrorReceived(dynamic eventData) { - String errorMsg = eventData as String; - print(errorMsg); + double dbfs = 20 * log(rms / 32767.0) / log(10); + double normalizedValue = (dbfs + dbOffset) / dbOffset; + return normalizedValue.clamp(0.0, 1.0); } Future _stopProcessing() async { @@ -85,15 +115,19 @@ class _MyAppState extends State { _isButtonDisabled = true; }); - await _voiceProcessor?.stop(); - _removeListener?.call(); - _removeListener2?.call(); - _errorListener?.call(); - - this.setState(() { - _isButtonDisabled = false; - _isProcessing = false; - }); + try { + await _voiceProcessor?.stop(); + } on PlatformException catch (ex) { + this.setState(() { + _errorMessage = "Failed to stop recorder: " + ex.toString(); + }); + } finally { + bool? isRecording = await _voiceProcessor?.isRecording(); + this.setState(() { + _isButtonDisabled = false; + _isProcessing = isRecording!; + }); + } } void _toggleProcessing() async { @@ -104,6 +138,8 @@ class _MyAppState extends State { } } + Color picoBlue = Color.fromRGBO(55, 125, 255, 1); + @override Widget build(BuildContext context) { return MaterialApp( @@ -111,18 +147,67 @@ class _MyAppState extends State { appBar: AppBar( title: const Text('Voice Processor'), ), - body: Center( - child: _buildToggleProcessingButton(), - ), + body: Column(children: [ + buildVuMeter(context), + buildStartButton(context), + buildErrorMessage(context) + ]), ), ); } - Widget _buildToggleProcessingButton() { - return new ElevatedButton( - onPressed: _isButtonDisabled ? null : _toggleProcessing, - child: Text(_isProcessing ? "Stop" : "Start", - style: TextStyle(fontSize: 20)), + buildVuMeter(BuildContext context) { + return Expanded( + flex: 2, + child: LayoutBuilder( + builder: (BuildContext context, BoxConstraints constraints) { + return Container( + alignment: Alignment.bottomCenter, + child: CustomPaint( + painter: VuMeterPainter(_smoothedVolumeValue, picoBlue), + size: Size(constraints.maxWidth * 0.95, 50))); + }, + )); + } + + buildStartButton(BuildContext context) { + final ButtonStyle buttonStyle = ElevatedButton.styleFrom( + primary: picoBlue, + shape: CircleBorder(), + textStyle: TextStyle(color: Colors.white)); + + return Expanded( + flex: 2, + child: Container( + child: SizedBox( + width: 150, + height: 150, + child: ElevatedButton( + style: buttonStyle, + onPressed: _isButtonDisabled || _errorMessage != null + ? null + : _toggleProcessing, + child: Text(_isProcessing ? 
"Stop" : "Start", + style: TextStyle(fontSize: 30)), + ))), ); } + + buildErrorMessage(BuildContext context) { + return Expanded( + flex: 1, + child: Container( + alignment: Alignment.center, + margin: EdgeInsets.all(30), + decoration: _errorMessage == null + ? null + : BoxDecoration( + color: Colors.red, borderRadius: BorderRadius.circular(5)), + child: _errorMessage == null + ? null + : Text( + _errorMessage!, + style: TextStyle(color: Colors.white, fontSize: 20), + ))); + } } diff --git a/example/lib/vu_meter_painter.dart b/example/lib/vu_meter_painter.dart new file mode 100644 index 0000000..8071ef9 --- /dev/null +++ b/example/lib/vu_meter_painter.dart @@ -0,0 +1,32 @@ +// +// Copyright 2023 Picovoice Inc. +// +// You may not use this file except in compliance with the license. A copy of the license is located in the "LICENSE" +// file accompanying this source. +// +// Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on +// an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the +// specific language governing permissions and limitations under the License. +// + +import 'package:flutter/material.dart'; + +class VuMeterPainter extends CustomPainter { + final double vuValue; + final Color fillColor; + + VuMeterPainter(this.vuValue, this.fillColor); + + @override + void paint(Canvas canvas, Size size) { + canvas.drawRect(Rect.fromLTWH(0, 0, size.width, size.height), + Paint()..color = Colors.grey); + canvas.drawRect(Rect.fromLTWH(0, 0, size.width * vuValue, size.height), + Paint()..color = fillColor); + } + + @override + bool shouldRepaint(CustomPainter oldDelegate) { + return true; + } +} diff --git a/example/pubspec.lock b/example/pubspec.lock index 90bc599..eb0c20c 100644 --- a/example/pubspec.lock +++ b/example/pubspec.lock @@ -43,13 +43,6 @@ packages: url: "https://pub.dartlang.org" source: hosted version: "1.16.0" - cupertino_icons: - dependency: "direct main" - description: - name: cupertino_icons - url: "https://pub.dartlang.org" - source: hosted - version: "1.0.0" fake_async: dependency: transitive description: @@ -73,7 +66,7 @@ packages: path: ".." relative: true source: path - version: "1.0.10" + version: "1.1.0" matcher: dependency: transitive description: diff --git a/example/pubspec.yaml b/example/pubspec.yaml index 0bfe633..4e43cdb 100644 --- a/example/pubspec.yaml +++ b/example/pubspec.yaml @@ -1,9 +1,7 @@ name: flutter_voice_processor_example description: Demonstrates how to use the flutter_voice_processor plugin. -# The following line prevents the package from being accidentally published to -# pub.dev using `pub publish`. This is preferred for private packages. -publish_to: 'none' # Remove this line if you wish to publish to pub.dev +publish_to: 'none' environment: sdk: ">=2.12.0 <3.0.0" @@ -13,59 +11,11 @@ dependencies: sdk: flutter flutter_voice_processor: - # When depending on this package from a real application you should use: - # flutter_voice_processor: ^x.y.z - # See https://dart.dev/tools/pub/dependencies#version-constraints - # The example app is bundled with the plugin so we use a path dependency on - # the parent directory to use the current plugin's version. path: ../ - # The following adds the Cupertino Icons font to your application. - # Use with the CupertinoIcons class for iOS style icons. 
- cupertino_icons: ^1.0.0 - dev_dependencies: flutter_test: sdk: flutter -# For information on the generic Dart part of this file, see the -# following page: https://dart.dev/tools/pub/pubspec - -# The following section is specific to Flutter. flutter: - - # The following line ensures that the Material Icons font is - # included with your application, so that you can use the icons in - # the material Icons class. - uses-material-design: true - - # To add assets to your application, add an assets section, like this: - # assets: - # - images/a_dot_burr.jpeg - # - images/a_dot_ham.jpeg - - # An image asset can refer to one or more resolution-specific "variants", see - # https://flutter.dev/assets-and-images/#resolution-aware. - - # For details regarding adding assets from package dependencies, see - # https://flutter.dev/assets-and-images/#from-packages - - # To add custom fonts to your application, add a fonts section here, - # in this "flutter" section. Each entry in this list should have a - # "family" key with the font family name, and a "fonts" key with a - # list giving the asset and other descriptors for the font. For - # example: - # fonts: - # - family: Schyler - # fonts: - # - asset: fonts/Schyler-Regular.ttf - # - asset: fonts/Schyler-Italic.ttf - # style: italic - # - family: Trajan Pro - # fonts: - # - asset: fonts/TrajanPro.ttf - # - asset: fonts/TrajanPro_Bold.ttf - # weight: 700 - # - # For details regarding fonts from package dependencies, - # see https://flutter.dev/custom-fonts/#from-packages + uses-material-design: true \ No newline at end of file diff --git a/ios/Classes/SwiftFlutterVoiceProcessorPlugin.swift b/ios/Classes/SwiftFlutterVoiceProcessorPlugin.swift index 21aff11..372eebb 100644 --- a/ios/Classes/SwiftFlutterVoiceProcessorPlugin.swift +++ b/ios/Classes/SwiftFlutterVoiceProcessorPlugin.swift @@ -1,5 +1,5 @@ // -// Copyright 2020-2021 Picovoice Inc. +// Copyright 2020-2023 Picovoice Inc. // // You may not use this file except in compliance with the license. A copy of the license is located in the "LICENSE" // file accompanying this source. @@ -13,14 +13,16 @@ import Flutter import UIKit import AVFoundation +import ios_voice_processor + public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterStreamHandler { + private let voiceProcessor = VoiceProcessor.instance; + private var settingsTimer: Timer? private var settingsLock = NSLock() - private var bufferEventSink: FlutterEventSink? + private var frameEventSink: FlutterEventSink? private var errorEventSink: FlutterEventSink? 
- private let audioInputEngine: AudioInputEngine = AudioInputEngine() - private var isListening = false private var isSettingsErrorReported = false public static func register(with registrar: FlutterPluginRegistrar) { @@ -29,33 +31,43 @@ public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterS let methodChannel = FlutterMethodChannel(name: "flutter_voice_processor_methods", binaryMessenger: registrar.messenger()) registrar.addMethodCallDelegate(instance, channel: methodChannel) - let eventChannel = FlutterEventChannel(name: "flutter_voice_processor_events", binaryMessenger: registrar.messenger()) - eventChannel.setStreamHandler(instance) + let frameEventChannel = FlutterEventChannel(name: "flutter_voice_processor_frame_events", binaryMessenger: registrar.messenger()) + frameEventChannel.setStreamHandler(instance) let errorEventChannel = FlutterEventChannel(name: "flutter_voice_processor_error_events", binaryMessenger: registrar.messenger()) errorEventChannel.setStreamHandler(instance) } + + public override init() { + super.init() + voiceProcessor.addFrameListener(VoiceProcessorFrameListener({ frame in + self.frameEventSink?(Array(frame)) + })) + + voiceProcessor.addErrorListener(VoiceProcessorErrorListener({ error in + self.errorEventSink?(error.errorDescription) + })) + } public func handle(_ call: FlutterMethodCall, result: @escaping FlutterResult) { switch call.method { case "start": - if let args = call.arguments as? [String : Any] { - if let frameLength = args["frameLength"] as? Int, - let sampleRate = args["sampleRate"] as? Int { - self.start(frameLength: frameLength, sampleRate: sampleRate, result: result) - } - else { - result(FlutterError(code: "PV_INVALID_ARGUMENT", message: "Invalid argument provided to VoiceProcessor.start", details: nil)) - } - } else { + guard let args = call.arguments as? [String : Any] else { result(FlutterError(code: "PV_INVALID_ARGUMENT", message: "Invalid argument provided to VoiceProcessor.start", details: nil)) + return } + guard let frameLength = args["frameLength"] as? UInt32, let sampleRate = args["sampleRate"] as? UInt32 else { + result(FlutterError(code: "PV_INVALID_ARGUMENT", message: "Invalid argument provided to VoiceProcessor.start", details: nil)) + return + } + + self.start(frameLength: frameLength, sampleRate: sampleRate, result: result) case "stop": - self.stop() - result(true) + self.stop(result:result) + case "isRecording": + result(self.voiceProcessor.isRecording) case "hasRecordAudioPermission": - let hasRecordAudioPermission:Bool = self.checkRecordAudioPermission() - result(hasRecordAudioPermission) + self.checkRecordAudioPermission(result:result) default: result(FlutterMethodNotImplemented) } @@ -63,8 +75,8 @@ public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterS public func onListen(withArguments arguments: Any?, eventSink events: @escaping FlutterEventSink) -> FlutterError? { if let type = arguments as? String { - if type == "buffer" { - self.bufferEventSink = events + if type == "frame" { + self.frameEventSink = events } else if type == "error" { @@ -76,8 +88,8 @@ public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterS public func onCancel(withArguments arguments: Any?) -> FlutterError? { if let type = arguments as? 
String { - if type == "buffer" { - self.bufferEventSink = nil + if type == "frame" { + self.frameEventSink = nil } else if type == "error" { self.errorEventSink = nil @@ -86,35 +98,13 @@ public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterS return nil } - public func start(frameLength: Int, sampleRate: Int, result: @escaping FlutterResult) -> Void { - - guard !isListening else { - NSLog("Audio engine already running.") - result(true) - return - } - - audioInputEngine.audioInput = { [weak self] audio in - - guard let `self` = self else { - return - } - - let buffer = UnsafeBufferPointer(start: audio, count: frameLength); - self.bufferEventSink?(Array(buffer)) - } - - do{ + public func start(frameLength: UInt32, sampleRate: UInt32, result: @escaping FlutterResult) -> Void { - try AVAudioSession.sharedInstance().setCategory( - AVAudioSession.Category.playAndRecord, - options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker]) - try AVAudioSession.sharedInstance().setActive(true) - try audioInputEngine.start(frameLength:frameLength, sampleRate:sampleRate) + do { + try voiceProcessor.start(frameLength: frameLength, sampleRate: sampleRate) } - catch{ - NSLog("Unable to start audio engine: \(error)"); - result(FlutterError(code: "PV_AUDIO_RECORDER_ERROR", message: "Unable to start audio engine: \(error)", details: nil)) + catch { + result(FlutterError(code: "PV_AUDIO_RECORDER_ERROR", message: "Unable to start audio recording: \(error)", details: nil)) return } @@ -125,16 +115,16 @@ public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterS userInfo: nil, repeats: true) isSettingsErrorReported = false - isListening = true + result(true) } @objc func monitorSettings() { settingsLock.lock() - if isListening && AVAudioSession.sharedInstance().category != AVAudioSession.Category.playAndRecord { + if voiceProcessor.isRecording && AVAudioSession.sharedInstance().category != AVAudioSession.Category.playAndRecord { if !isSettingsErrorReported { - errorEventSink?("ERROR: Audio settings have been changed and Picovoice is no longer receiving microphone audio.") + errorEventSink?("Audio settings have been changed and Picovoice is no longer receiving microphone audio.") isSettingsErrorReported = true } } @@ -142,106 +132,25 @@ public class SwiftFlutterVoiceProcessorPlugin: NSObject, FlutterPlugin, FlutterS settingsLock.unlock() } - private func stop() -> Void{ - guard isListening else { - return - } - - self.audioInputEngine.stop() + private func stop(result: @escaping FlutterResult) -> Void { do { - try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation) - } - catch { - errorEventSink?("Unable to stop audio engine: \(error)") + try voiceProcessor.stop() + } catch { + result(FlutterError(code: "PV_AUDIO_RECORDER_ERROR", message: "Unable to stop audio recording: \(error)", details: nil)) return } - settingsTimer?.invalidate() isSettingsErrorReported = false - isListening = false - } - - private func checkRecordAudioPermission() -> Bool{ - return AVAudioSession.sharedInstance().recordPermission != .denied + result(true) } - - private class AudioInputEngine { - - private let numBuffers = 3 - private var audioQueue: AudioQueueRef? - private var bufferRef: AudioQueueBufferRef? - private var started = false - - var audioInput: ((UnsafePointer) -> Void)? 
- - func start(frameLength:Int, sampleRate:Int) throws { - if started { - return - } - - var format = AudioStreamBasicDescription( - mSampleRate: Float64(sampleRate), - mFormatID: kAudioFormatLinearPCM, - mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked, - mBytesPerPacket: 2, - mFramesPerPacket: 1, - mBytesPerFrame: 2, - mChannelsPerFrame: 1, - mBitsPerChannel: 16, - mReserved: 0) - let userData = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()) - AudioQueueNewInput(&format, createAudioQueueCallback(), userData, nil, nil, 0, &audioQueue) - - guard let queue = audioQueue else { - return - } - - let bufferSize = UInt32(frameLength) * 2 - for _ in 0.. AudioQueueInputCallback { - return { userData, queue, bufferRef, startTimeRef, numPackets, packetDescriptions in - - // `self` is passed in as userData in the audio queue callback. - guard let userData = userData else { - return - } - let `self` = Unmanaged.fromOpaque(userData).takeUnretainedValue() - - let pcm = bufferRef.pointee.mAudioData.assumingMemoryBound(to: Int16.self) - - if let audioInput = self.audioInput { - audioInput(pcm) - } - - AudioQueueEnqueueBuffer(queue, bufferRef, 0, nil) - } + private func checkRecordAudioPermission(result: @escaping FlutterResult) -> Void { + if VoiceProcessor.hasRecordAudioPermission { + result(true) + } else { + VoiceProcessor.requestRecordAudioPermission({ isGranted in + result(isGranted) + }) } - } - } diff --git a/ios/flutter_voice_processor.podspec b/ios/flutter_voice_processor.podspec index 309d0ae..530861a 100644 --- a/ios/flutter_voice_processor.podspec +++ b/ios/flutter_voice_processor.podspec @@ -4,10 +4,11 @@ # Pod::Spec.new do |s| s.name = 'flutter_voice_processor' - s.version = '1.0.6' - s.summary = 'A Flutter package plugin for real-time voice processing.' + s.version = '1.1.0' + s.summary = 'A Flutter audio recording plugin designed for real-time speech audio processing.' s.description = <<-DESC - A Flutter package plugin for real-time voice processing. + The Flutter Voice Processor is an asynchronous audio capture library designed for real-time audio processing. + Given some specifications, the library delivers frames of raw audio data to the user via listeners. DESC s.homepage = 'https://picovoice.ai/' s.license = { :type => 'Apache-2.0' } @@ -15,7 +16,8 @@ Pod::Spec.new do |s| s.source = { :git => "https://github.com/Picovoice/flutter-voice-processor.git" } s.source_files = 'Classes/**/*' s.dependency 'Flutter' - s.platform = :ios, '9.0' + s.dependency 'ios-voice-processor', '~> 1.1.0' + s.platform = :ios, '11.0' # Flutter.framework does not contain a i386 slice. s.pod_target_xcconfig = { 'DEFINES_MODULE' => 'YES', 'EXCLUDED_ARCHS[sdk=iphonesimulator*]' => 'i386' } diff --git a/lib/flutter_voice_processor.dart b/lib/flutter_voice_processor.dart index be7dc48..85c05dd 100644 --- a/lib/flutter_voice_processor.dart +++ b/lib/flutter_voice_processor.dart @@ -1,5 +1,5 @@ // -// Copyright 2020-2021 Picovoice Inc. +// Copyright 2020-2023 Picovoice Inc. // // You may not use this file except in compliance with the license. A copy of the license is located in the "LICENSE" // file accompanying this source. @@ -13,89 +13,155 @@ import 'dart:async'; import 'package:flutter/services.dart'; -typedef void EventListener(dynamic buffer); +/// Exception class for errors related to the VoiceProcessor. +class VoiceProcessorException implements Exception { + /// The error message associated with the exception. + final String? 
message; -typedef void RemoveListener(); + /// Creates a VoiceProcessorException with an optional error [message]. + VoiceProcessorException([this.message]); +} + +/// Type for callback functions that receive audio frames from the VoiceProcessor. +typedef void VoiceProcessorFrameListener(List frame); +/// Type for callbacks that receive errors from the VoiceProcessor. +typedef void VoiceProcessorErrorListener(VoiceProcessorException error); + +/// An audio capture library designed for real-time speech audio processing +/// on mobile devices. Given some specifications, the library delivers +/// frames of raw audio data to the user via listeners. class VoiceProcessor { static VoiceProcessor? _instance; - int _frameLength; - int _sampleRate; - Stream? _bufferEventStream; + Stream? _frameEventStream; Stream? _errorEventStream; - bool _isRecording = false; - - bool get isRecording => this._isRecording; - final MethodChannel _channel = const MethodChannel('flutter_voice_processor_methods'); - final EventChannel _eventChannel = - const EventChannel('flutter_voice_processor_events'); + final EventChannel _frameEventsChannel = + const EventChannel('flutter_voice_processor_frame_events'); final EventChannel _errorEventsChannel = const EventChannel('flutter_voice_processor_error_events'); - VoiceProcessor._(this._frameLength, this._sampleRate) { - _bufferEventStream = _eventChannel.receiveBroadcastStream("buffer"); + List _frameListeners = []; + List _errorListeners = []; + + void _onFrame(List frame) { + for (VoiceProcessorFrameListener frameListener in _frameListeners) { + frameListener(frame); + } + } + + void _onError(String errorMessage) { + if (_errorListeners.isNotEmpty) { + for (VoiceProcessorErrorListener errorListener in _errorListeners) { + errorListener(VoiceProcessorException(errorMessage)); + } + } else { + print("VoiceProcessorException: " + errorMessage); + } + } + + /// Private constructor for VoiceProcessor. + VoiceProcessor._() { + _frameEventStream = _frameEventsChannel.receiveBroadcastStream("frame"); + _frameEventStream?.listen((event) { + try { + List frame = (event as List).cast(); + _onFrame(frame); + } on Error { + _onError("VoiceProcessorException: Failed to cast incoming frame data"); + } + }, cancelOnError: true); + _errorEventStream = _errorEventsChannel.receiveBroadcastStream("error"); + _errorEventStream?.listen((event) { + try { + String error = event as String; + _onError(error); + } on Error { + _onError( + "VoiceProcessorException: Unable to cast incoming error event."); + } + }, cancelOnError: true); } - /// Singleton getter for VoiceProcessor that delivers frames of size - /// [frameLenth] and at a sample rate of [sampleRate] - static getVoiceProcessor(int frameLength, int sampleRate) { + /// Singleton instance of VoiceProcessor + static VoiceProcessor? get instance { if (_instance == null) { - _instance = new VoiceProcessor._(frameLength, sampleRate); - } else { - _instance?._frameLength = frameLength; - _instance?._sampleRate = sampleRate; + _instance = new VoiceProcessor._(); } return _instance; } - /// Add a [listener] function that triggers every time the VoiceProcessor - /// delivers a frame of audio - RemoveListener addListener(EventListener listener) { - var subscription = - _bufferEventStream?.listen(listener, cancelOnError: true); - return () { - subscription?.cancel(); - }; + /// Gets the number of registered frame listeners. + int get numFrameListeners => _frameListeners.length; + + /// Gets the number of registered error listeners. 
+ int get numErrorListeners => _errorListeners.length; + + /// Adds a new listener that receives audio frames. + void addFrameListener(VoiceProcessorFrameListener listener) { + _frameListeners.add(listener); + } + + /// Adds a list of listeners that receive audio frames. + void addFrameListeners(List listeners) { + _frameListeners.addAll(listeners); } - /// Add an [errorListener] function that triggers when a the native audio - /// recorder encounters an error - RemoveListener addErrorListener(EventListener errorListener) { - var subscription = - _errorEventStream?.listen(errorListener, cancelOnError: true); - return () { - subscription?.cancel(); - }; + /// Removes a previously added frame listener. + void removeFrameListener(VoiceProcessorFrameListener listener) { + _frameListeners.remove(listener); } - /// Starts audio recording - /// throws [PlatformError] if native audio engine doesn't start - Future start() async { - if (_isRecording) { - return; + /// Removes previously added frame listeners. + void removeFrameListeners(List listeners) { + for (VoiceProcessorFrameListener listener in listeners) { + _frameListeners.remove(listener); } + } + + /// Removes all frame listeners. + void clearFrameListeners() { + _frameListeners.clear(); + } + + /// Adds a new error listener. + void addErrorListener(VoiceProcessorErrorListener errorListener) { + _errorListeners.add(errorListener); + } + + /// Removes a previously added error listener. + void removeErrorListener(VoiceProcessorErrorListener errorListener) { + _errorListeners.remove(errorListener); + } + + /// Removes all error listeners. + void clearErrorListeners() { + _errorListeners.clear(); + } + + /// Starts audio recording with the given [sampleRate] and delivering audio + /// frames of size [frameLength] to registered frame listeners. + Future start(int frameLength, int sampleRate) async { await _channel.invokeMethod('start', { - 'frameLength': _frameLength, - 'sampleRate': _sampleRate + 'frameLength': frameLength, + 'sampleRate': sampleRate }); - _isRecording = true; } - /// Stops audio recording + /// Stops audio recording. Future stop() async { - if (!_isRecording) { - return; - } await _channel.invokeMethod('stop'); - _isRecording = false; } - /// Checks if user has granted recording permission and - /// asks for it if they haven't + /// Checks if audio recording is currently in progress. + Future isRecording() async { + return _channel.invokeMethod('isRecording'); + } + + /// Checks if the app has permission to record audio and prompts user if not. Future hasRecordAudioPermission() { return _channel.invokeMethod('hasRecordAudioPermission'); } diff --git a/pubspec.yaml b/pubspec.yaml index cefac80..4074f26 100644 --- a/pubspec.yaml +++ b/pubspec.yaml @@ -1,6 +1,6 @@ name: flutter_voice_processor -description: A Flutter plugin that delivers audio buffers for real-time processing. -version: 1.0.10 +description: A Flutter audio recording plugin designed for real-time speech audio processing. +version: 1.1.0 homepage: https://picovoice.ai/ repository: https://github.com/Picovoice/flutter-voice-processor