I'm working on an audio application, written in C.
I need to provide live audio playback under Windows.
I need to decide which audio API to use.
I'm planning to use waveOut, but I wanted to check to see what the community here recommends.
I want code that will Just Work on any recent version of Windows, with no need for users to install any extra drivers or software, and I want minimal latency.
I don't need or want any "effects", I just need to faithfully play whatever wave samples the application generates.
My understanding is that most of the professional audio applications on Windows use ASIO, which gives excellent low latency, but I don't want ASIO because I want my code to Just Work and most people don't have ASIO pre-installed on their computers.
(At a later date I may go back and also add ASIO as an option, but I'm going for the most general solution first.)
Is there anything out there that would be better than waveOut for my purposes, or is that the best choice?
It depends on what you are trying to do.
The basic waveOut audio API is better for streaming audio.
It lets you queue up several buffers and have them automatically played in succession.
But if audio is playing and you want to change it, or add something to it, that's relatively hard.
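For illustration, the buffer-queuing pattern looks roughly like this (a minimal sketch, assuming 16-bit stereo PCM at 44.1 kHz; error handling and buffer-completion notification are left out):

    #include <windows.h>
    #include <mmsystem.h>   /* link with winmm.lib */

    #define NUM_BUFFERS       4
    #define FRAMES_PER_BUFFER 2048

    static short   samples[NUM_BUFFERS][FRAMES_PER_BUFFER * 2]; /* interleaved stereo */
    static WAVEHDR headers[NUM_BUFFERS];

    void start_playback(void)
    {
        WAVEFORMATEX wfx = {0};
        HWAVEOUT hwo;
        int i;

        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 2;
        wfx.nSamplesPerSec  = 44100;
        wfx.wBitsPerSample  = 16;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        /* CALLBACK_NULL keeps the sketch simple; a real app would use
           CALLBACK_FUNCTION or an event to learn when a buffer finishes. */
        waveOutOpen(&hwo, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL);

        for (i = 0; i < NUM_BUFFERS; i++) {
            /* fill samples[i] with generated audio here ... */
            headers[i].lpData         = (LPSTR)samples[i];
            headers[i].dwBufferLength = sizeof(samples[i]);
            waveOutPrepareHeader(hwo, &headers[i], sizeof(WAVEHDR));
            waveOutWrite(hwo, &headers[i], sizeof(WAVEHDR)); /* queued, played in order */
        }
        /* as each buffer completes, refill it and call waveOutWrite again */
    }

waveOutWrite simply appends each prepared buffer to the device's queue; when a buffer finishes playing you refill it and write it again, which is why changing audio that is already queued is awkward.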
DirectX audio is better for event based audio.
You can have several things playing at the same time without having to do the mixing yourself. You can add or remove little pieces of audio easily - like playing a sound when the user pulls the trigger on his gun. But streaming (i.e. playing 1 buffer after another) is harder.
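As a rough sketch of that event-style use (not production code: it assumes you already have the PCM data and a matching WAVEFORMATEX, and that hwnd is your application's window; error handling omitted):

    #define DIRECTSOUND_VERSION 0x0800
    #include <windows.h>
    #include <string.h>
    #include <dsound.h>   /* link with dsound.lib */

    void play_event_sound(HWND hwnd, const short *pcm, DWORD bytes, WAVEFORMATEX *wfx)
    {
        LPDIRECTSOUND8 ds = NULL;
        LPDIRECTSOUNDBUFFER buf = NULL;
        DSBUFFERDESC desc;
        void *p1; DWORD n1; void *p2; DWORD n2;

        DirectSoundCreate8(NULL, &ds, NULL);
        ds->lpVtbl->SetCooperativeLevel(ds, hwnd, DSSCL_PRIORITY);

        ZeroMemory(&desc, sizeof(desc));
        desc.dwSize        = sizeof(desc);
        desc.dwFlags       = DSBCAPS_GLOBALFOCUS;
        desc.dwBufferBytes = bytes;
        desc.lpwfxFormat   = wfx;
        ds->lpVtbl->CreateSoundBuffer(ds, &desc, &buf, NULL);

        /* copy the samples into the (possibly split) locked region */
        buf->lpVtbl->Lock(buf, 0, bytes, &p1, &n1, &p2, &n2, 0);
        memcpy(p1, pcm, n1);
        if (p2) memcpy((char *)p2, (const char *)pcm + n1, n2);
        buf->lpVtbl->Unlock(buf, p1, n1, p2, n2);

        buf->lpVtbl->Play(buf, 0, 0, 0);
    }

Each secondary buffer you Play() is mixed with whatever else is already playing, which is what makes the fire-a-sound-on-an-event case easy.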
waveOut is designed to facilitate playing audio that is constant, like a .mp3 file.
DirectX is designed for audio that is intermittent, like feedback in a game.
ASIO is like the worst of waveOut and DirectX in terms of difficulty of programming.
And it's not that stable.
Applications typically can't share the audio device.
But it gives you the lowest latency access to that audio hardware.
ASIO also gives you a way to synchronize playback on multiple devices.
If you don't need to be able to change what is going to be played right before it is played, and you don't need to synchronize multiple devices then you don't need ASIO.
In addition to the options mentioned by John Knoeller, there is WASAPI, which allows for much lower latencies than waveOut, but unfortunately is only available from Windows Vista onwards.
At the time I asked this question, I wrote streaming code using the waveOut and waveIn APIs.
Since then, I have discovered a useful library: PortAudio.
PortAudio is free software with a commercial-friendly license.
If you write your code to call PortAudio, it should work with waveOut devices and also with ASIO devices under Windows; it can then be recompiled for Linux, where it should work with ALSA, and recompiled for the Mac, where it should work with CoreAudio devices.
I haven't tested the Mac part but my project is working great with Windows and Linux.
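To give a feel for the API, here is a minimal playback sketch (assuming stereo 32-bit float output at 44.1 kHz; the sine-wave callback is just a stand-in for whatever your application generates):

    #include <math.h>
    #include <portaudio.h>

    #define SAMPLE_RATE 44100

    static int render(const void *input, void *output, unsigned long frames,
                      const PaStreamCallbackTimeInfo *timeInfo,
                      PaStreamCallbackFlags statusFlags, void *userData)
    {
        float *out = (float *)output;
        double *phase = (double *)userData;
        unsigned long i;
        (void)input; (void)timeInfo; (void)statusFlags;

        for (i = 0; i < frames; i++) {
            float s = (float)sin(*phase);   /* placeholder: your samples here */
            *out++ = s;                     /* left  */
            *out++ = s;                     /* right */
            *phase += 2.0 * 3.14159265358979 * 440.0 / SAMPLE_RATE;
        }
        return paContinue;
    }

    int main(void)
    {
        PaStream *stream;
        double phase = 0.0;

        Pa_Initialize();
        Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, SAMPLE_RATE,
                             paFramesPerBufferUnspecified, render, &phase);
        Pa_StartStream(stream);
        Pa_Sleep(2000);                     /* let it play for two seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }

PortAudio calls render() from its own audio thread whenever the device needs more samples, and maps that callback onto waveOut or ASIO on Windows, ALSA on Linux, and CoreAudio on the Mac, depending on how it was built.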
Having written a DirectSound streaming application myself, I certainly recommend it for low latency and ease of use. Also, it enables you to set a higher-quality format for playback on legacy editions of Windows.
What are my options for playing simultaneous audio on an Android device with the least amount of latency?
Am I going to get anything half decent out of the canned SDK, or is that asking too much?
The documentation claims that the SoundPool class is capable of playing multiple sounds simultaneously with relatively good performance, but after running some tests in the emulator and on a physical device it seems pretty weak.
Is there maybe a trick to it, or do I have to go to a much lower level api for this kind of thing?
I've tried using a single sound pool with multiple samples loaded, and I've tried multiple sound pools each managing a single sample.
I'm preloading everything, so that when I attempt to play back I have no additional code being executed other than the call to SoundPool.play().
A number of people are interested in low-latency audio on Android. Here are the threads I'm following on the topic:
- A session covering audio-related topics from this year's I/O conference.
- An Android issue about support for low-latency audio in the NDK.
- A thread that suggests Android devices have ALSA drivers (capable of low-latency audio), but it seems that low-latency functionality isn't exposed to apps via the NDK.
I have no direct experience with Android, but from what I've read, low latency (less than 10 ms or so) isn't a reality yet. Please post any experience to the contrary!
I don't have any Android experience, but I've written similar things for Windows Mobile.
The devices themselves are certainly capable of mixing multiple sounds in real-time with a low latency (under 25 ms), although by "multiple" I mean maybe 4 or 5 (and not 30 to 40).
However, I was only able to achieve this satisfactorily by writing my own code that did the mixing internally and accessed the low-level audio playback API only for playing the final mixed output.
The higher-level ways of playing sounds in the .Net Compact Framework are theoretically capable of polyphony, but in practice they work horribly (lots of glitches, stuttering and distortion).
I suspect that the Android audio SDK has the same problem, so you may have to write your own.
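The core of that do-it-yourself approach is nothing fancy. A hypothetical sketch: sum the active voices into a wider accumulator and clamp before handing a single stream to the playback API:

    #include <stdint.h>
    #include <stddef.h>

    /* Mix `count` samples from `nvoices` 16-bit sources into one output
       buffer, saturating instead of wrapping on overflow. This is the
       "mix in software, give the device one stream" approach. */
    static void mix_voices(const int16_t *const *voices, size_t nvoices,
                           int16_t *out, size_t count)
    {
        size_t i, v;
        for (i = 0; i < count; i++) {
            int32_t acc = 0;
            for (v = 0; v < nvoices; v++)
                acc += voices[v][i];
            if (acc >  32767) acc =  32767;   /* clamp to int16 range */
            if (acc < -32768) acc = -32768;
            out[i] = (int16_t)acc;
        }
    }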
Android 2.3 now supports native access to audio APIs (via OpenSL) for low latency applications.
However, not all hardware devices will have the low-latency audio feature profile. Thus, applications requiring low-latency audio should filter out devices that don't support it in the Android Market by specifying the following in the manifest:
<uses-feature android:name="android.hardware.audio.low_latency"/>
Please see my answer to .
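For anyone heading down the native route, here is a minimal OpenSL ES playback sketch in C (a sketch only: it assumes 16-bit stereo PCM at 44.1 kHz with a two-buffer queue, omits error checking, and fill_buffer() is a hypothetical stand-in for your own sample generation):

    #include <string.h>
    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    #define FRAMES 512                 /* frames per buffer: smaller = lower latency */

    static SLObjectItf engineObj, mixObj, playerObj;
    static SLEngineItf engine;
    static SLPlayItf   play;
    static SLAndroidSimpleBufferQueueItf queue;
    static short pcm[2][FRAMES * 2];   /* two interleaved stereo buffers */
    static int current = 0;

    /* hypothetical: replace with your own sample generation */
    extern void fill_buffer(short *dst, int frames);

    static void on_buffer_done(SLAndroidSimpleBufferQueueItf bq, void *ctx)
    {
        fill_buffer(pcm[current], FRAMES);
        (*bq)->Enqueue(bq, pcm[current], sizeof(pcm[current]));
        current ^= 1;
    }

    void start_native_audio(void)
    {
        SLDataLocator_AndroidSimpleBufferQueue loc_bq =
            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
        SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
            SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT, SL_BYTEORDER_LITTLEENDIAN };
        SLDataSource src = { &loc_bq, &fmt };
        SLDataLocator_OutputMix loc_mix;
        SLDataSink sink;
        const SLInterfaceID ids[1] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
        const SLboolean req[1] = { SL_BOOLEAN_TRUE };

        slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
        (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

        (*engine)->CreateOutputMix(engine, &mixObj, 0, NULL, NULL);
        (*mixObj)->Realize(mixObj, SL_BOOLEAN_FALSE);

        loc_mix.locatorType = SL_DATALOCATOR_OUTPUTMIX;
        loc_mix.outputMix   = mixObj;
        sink.pLocator = &loc_mix;
        sink.pFormat  = NULL;

        (*engine)->CreateAudioPlayer(engine, &playerObj, &src, &sink, 1, ids, req);
        (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
        (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &play);
        (*playerObj)->GetInterface(playerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &queue);
        (*queue)->RegisterCallback(queue, on_buffer_done, NULL);

        /* prime both buffers, then start playing */
        fill_buffer(pcm[0], FRAMES);
        fill_buffer(pcm[1], FRAMES);
        (*queue)->Enqueue(queue, pcm[0], sizeof(pcm[0]));
        (*queue)->Enqueue(queue, pcm[1], sizeof(pcm[1]));
        (*play)->SetPlayState(play, SL_PLAYSTATE_PLAYING);
    }

Smaller values of FRAMES reduce latency but make underruns more likely.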
The latency of an Android device depends less on the API and more on the hardware and drivers for that particular device. Enabling low latency playback consumes more power and increases the chances of audio glitches, so many OEMs will enlarge their playback buffers intentionally.