Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to prevent version conflicts and the need for binding redirects. (A minimal end-to-end setup sketch appears at the end of this article.)

Transcribing Audio Files

One of the key features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

// Create the client with your API key.
var client = new AssemblyAIClient("YOUR_API_KEY");

// Transcribe a remote audio file by URL.
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used:

// Stream a local file to the transcription API.
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example.
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR, enabling developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

To learn more, visit the official AssemblyAI blog.

Image source: Shutterstock
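For readers who want to try the snippets above in a fresh project, here is a minimal, self-contained console-program sketch that puts the pieces together. It is a sketch under stated assumptions rather than part of the article's examples: the NuGet package ID (AssemblyAI, installed with "dotnet add package AssemblyAI") and the choice to read the API key from an ASSEMBLYAI_API_KEY environment variable are assumptions for illustration; the client and transcription calls themselves mirror the API surface shown above.

// Assumes: `dotnet add package AssemblyAI` has been run in the project,
// and the API key is stored in the ASSEMBLYAI_API_KEY environment variable
// (both are assumptions for this sketch, not taken from the article above).
using System;
using System.Threading.Tasks;
using AssemblyAI;
using AssemblyAI.Transcripts;

class Program
{
    static async Task Main()
    {
        // Read the API key from the environment instead of hard-coding it.
        var apiKey = Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")
                     ?? throw new InvalidOperationException("Set ASSEMBLYAI_API_KEY first.");

        var client = new AssemblyAIClient(apiKey);

        // Transcribe the same sample file used in the examples above.
        var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
        {
            AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
        });

        // Throws if the transcript did not reach the completed status.
        transcript.EnsureStatusCompleted();
        Console.WriteLine(transcript.Text);
    }
}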