

Objective-C: could NaN be causing the occasional crash in this Core Audio iOS app?

My first app synthesised music audio from a sine look-up table using methods deprecated since iOS 6. I have just revised it to address warnings about AudioSession, helped by this blog and the Apple guidelines on the AVFoundation framework. The Audio Session warnings have now been addressed and the app produces audio as it did before. It currently runs under iOS 9.

However, the app occasionally crashes for no apparent reason. I checked out this SO post, but it deals with accessing rather than generating raw audio data, so it may not cover the same timing issue. I suspect there is a buffering problem, but I need to understand what it might be before I change or fine-tune anything in the code.

I have a deadline to make the revised app available to users, so I'd be most grateful to hear from someone who has dealt with a similar issue.

Here is the issue. The app breaks into the debugger on the simulator, reporting:

com.apple.coreaudio.AQClient (8):EXC_BAD_ACCESS (code=1, address=0xffffffff10626000)

In the Debug Navigator, Thread 8 (com.apple.coreaudio.AQClient (8)), it reports:

    0 -[Synth fillBuffer:frames:]
    1 -[PlayView audioBufferPlayer:fillBuffer:format:]
    2 playCallback

This line of code in fillBuffer is highlighted

    float sineValue = (1.0f - b)*sine[a] + b*sine[c];

... and so is this line of code in audioBufferPlayer

    int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];

... and this line in playCallback

    [player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];

Here is the code for the audioBufferPlayer delegate method (essentially the same as in the demo referred to above).

    - (void)audioBufferPlayer:(AudioBufferPlayer*)audioBufferPlayer fillBuffer:(AudioQueueBufferRef)buffer format:(AudioStreamBasicDescription)audioFormat
    {
        [synthLock lock];
        int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;
        int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
        buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;
        [synthLock unlock];
    }
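
Since a buffering problem is one of my suspicions, one low-risk check is to clamp the value the synth returns before it is used to set the byte size. The following is a defensive sketch, not the app's existing code:

    // Defensive variant of the delegate body (illustrative only): clamp the packet
    // count so a bad return value can never claim more bytes than the buffer holds.
    [synthLock lock];
    int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;
    int packetsWritten = [synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
    if (packetsWritten < 0) packetsWritten = 0;
    if (packetsWritten > packetsPerBuffer) packetsWritten = packetsPerBuffer;
    buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;
    [synthLock unlock];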

... (initialised in myViewController)

- (id)init
{
    if ((self = [super init])) {

        // The audio buffer is managed (filled up etc.) within its own thread (Audio Queue thread).
        // Since we are also responding to changes from the GUI, we need a lock so both threads
        // do not attempt to change the same value independently.
        synthLock = [[NSLock alloc] init];

        // Synth and the AudioBufferPlayer must use the same sample rate.
        float sampleRate = 44100.0f;

        // Initialise synth to fill the audio buffer with audio samples.
        synth = [[Synth alloc] initWithSampleRate:sampleRate];

        // Initialise note buttons.
        buttons = [[NSMutableArray alloc] init];

        // Initialise the audio buffer.
        player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate channels:1 bitsPerChannel:16 packetsPerBuffer:1024];
        player.delegate = self;
        player.gain = 0.9f;
        [[AVAudioSession sharedInstance] setActive:YES error:nil];
    }
    return self;
}   // initialisation
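
As the comment in init says, any GUI-side change to the synth should take the same lock so it cannot race the Audio Queue thread that runs fillBuffer. A sketch of what that looks like; the method name and the noteOn: call are hypothetical, not from the original code:

    // Hypothetical GUI-side handler (illustrative only): the same synthLock used by
    // the audio delegate guards any change made from the main thread.
    - (void)noteButtonPressed:(int)midiNote
    {
        [synthLock lock];
        [synth noteOn:midiNote];   // hypothetical Synth method
        [synthLock unlock];
    }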

... and for playCallback

static void playCallback( void* inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inBuffer)
{
    AudioBufferPlayer* player = (AudioBufferPlayer*) inUserData;
    if (player.playing){
        [player.delegate audioBufferPlayer:player fillBuffer:inBuffer format:player.audioFormat];
        AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
    }
}
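
For orientation, this is roughly how a callback of that shape is attached to an output queue inside AudioBufferPlayer; it is a sketch assuming the player passes itself as the user-data pointer (which matches the cast above), not the actual AudioBufferPlayer code:

    // Sketch of the usual wiring inside AudioBufferPlayer's setup (names assumed,
    // not the actual code): register playCallback and pass the player as inUserData.
    // Under ARC a __bridge cast would be needed; the original appears to use MRC.
    AudioQueueRef queue = NULL;
    AudioStreamBasicDescription format = self.audioFormat;
    OSStatus status = AudioQueueNewOutput(&format, playCallback, (void*)self,
                                          CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                                          0, &queue);
    if (status != noErr) {
        NSLog(@"AudioQueueNewOutput failed: %d", (int)status);
    }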

... and here is the code for fillBuffer where audio is synthesised

- (int)fillBuffer:(void*)buffer frames:(int)frames
{
    SInt16* p = (SInt16*)buffer;

    // Loop through the frames (or "block size"), then consider each sample for each tone.

    for (int f = 0; f < frames; ++f)
    {
        float m = 0.0f;  // the mixed value for this frame

        for (int n = 0; n < MAX_TONE_EVENTS; ++n)
        {
            if (tones[n].state == STATE_INACTIVE)   // only active tones
                continue;

            // recalculate a 30 sec envelope and place in a look-up table
            // Longer notes need to interpolate through the envelope

            int a   = (int)tones[n].envStep;        // integer part  (like a floored float)
            float b = tones[n].envStep - a;         // decimal part  (like doing a modulo)

            // c allows us to calculate if we need to wrap around

            int c = a + 1;                          // (like a ceiling of integer part)
            if (c >= envLength) c = a;              // don't wrap around

            /////////////// LOOK UP ENVELOPE TABLE /////////////////

            // uses table look-up with interpolation for both level and pitch envelopes
            // 'b' is a value interpolated between 2 successive samples 'a' and 'c'
            // first, read values for the level envelope

            float envValue = (1.0f - b)*tones[n].levelEnvelope[a] + b*tones[n].levelEnvelope[c];

            // then the pitch envelope

            float pitchFactorValue = (1.0f - b)*tones[n].pitchEnvelope[a] + b*tones[n].pitchEnvelope[c];

            // Advance envelope pointer one step

            tones[n].envStep += tones[n].envDelta;

            // Turn note off at the end of the envelope.

            if (((int)tones[n].envStep) >= envLength){
                tones[n].state = STATE_INACTIVE;
                continue;
            }

            // Precalculated sine look-up table

            a = (int)tones[n].phase;                    // integer part
            b = tones[n].phase - a;                     // decimal part
            c = a + 1;
            if (c >= sineLength) c -= sineLength;       // wrap around

            ///////////////// LOOK UP OF SINE TABLE ///////////////////

            float sineValue = (1.0f - b)*sine[a] + b*sine[c];

            // Wrap round when we get to the end of the sine look-up table.

            tones[n].phase += (tones[n].frequency * pitchFactorValue); // calculate frequency for each point in the pitch envelope
            if (((int)tones[n].phase) >= sineLength)
                tones[n].phase -= sineLength;

            ////////////////// RAMP NOTE OFF IF IT HAS BEEN UNPRESSED //////////////////

            if (tones[n].state == STATE_UNPRESSED) {
                tones[n].gain -= 0.0001;
                if (tones[n].gain <= 0) {
                    tones[n].state = STATE_INACTIVE;
                }
            }

            //////////////// FINAL SAMPLE VALUE ///////////////////

            float s = sineValue * envValue * gain * tones[n].gain;

            // Clip the signal, if needed.

            if (s > 1.0f) s = 1.0f;
            else if (s < -1.0f) s = -1.0f;

            // Add the sample to the out-going signal

            m += s;
        }

        // Write the sample mix to the buffer as a 16-bit word.

        p[f] = (SInt16)(m * 0x7FFF);
    }
    return frames;
}

I'm not sure whether it is a red herring, but I came across NaN in several debug registers. It appears to happen while calculating the phase increment for the sine lookup in fillBuffer (see above). That calculation is done for up to a dozen partials per sample at a sampling rate of 44.1 kHz, and it worked under iOS 4 on an iPhone 4. I'm now running on the iOS 9 simulator. The only changes I made are the ones described in this post!
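
If the phase (or envStep) really is going NaN, that alone could explain the EXC_BAD_ACCESS: casting a NaN float to int is undefined behaviour and typically yields a huge or negative value, which then indexes far outside the sine table. A guard placed at the top of the per-tone loop in fillBuffer would confirm or contain it; this is a diagnostic sketch, not the app's existing code:

    // Diagnostic sketch (requires <math.h> for isfinite): placed just after the
    // STATE_INACTIVE check inside the per-tone loop in fillBuffer.
    if (!isfinite(tones[n].phase) || !isfinite(tones[n].envStep)) {
        NSLog(@"Tone %d has a non-finite phase/envStep - deactivating", n);
        tones[n].state = STATE_INACTIVE;   // stop the tone rather than crash on sine[a]
        continue;
    }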


1 Answer


My NaN problem turned out to have nothing directly to do with Core Audio. It was caused by an edge case introduced by changes in another area of my code. The real problem was a division by zero attempted while calculating the duration of the sound envelope in real time.
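
For anyone hitting the same thing: a zero duration fed into a division like the one below produces inf or NaN, which then propagates into envStep and phase and eventually into the table lookups. The method and variable names here are hypothetical (the original envelope code isn't posted); it only illustrates the kind of guard that fixed it:

    // Hypothetical envelope-duration setter (names are illustrative, not from the app).
    // Clamping the divisor is what keeps envDelta from becoming inf/NaN.
    - (void)setEnvelopeDuration:(float)seconds
    {
        const float minimumDuration = 0.001f;              // avoid division by zero
        if (seconds < minimumDuration) seconds = minimumDuration;

        // Spread envLength envelope steps over 'seconds' of audio at the sample rate.
        envDelta = (float)envLength / (seconds * sampleRate);
    }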

However, in trying to identify the cause of that problem, I am confident that my pre-iOS 7 Audio Session code has been replaced by a working setup based on AVFoundation. Thanks go to Matthijs Hollemans, the source of my initial code, and also to Mario Diana, whose blog explained the changes needed.

At first, the sound levels on my iPhone were significantly lower than the sound levels on the Simulator, a problem addressed here by foundry. I found it necessary to include those improvements by replacing Mario's

    - (BOOL)setUpAudioSession

with foundry's

    - (void)configureAVAudioSession
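
For context, here is a minimal sketch of what such an AVFoundation-based setup looks like; this is illustrative only, and foundry's actual implementation is in the linked answer:

    // Minimal illustration of an AVFoundation-based session setup (not foundry's exact code).
    - (void)configureAVAudioSession
    {
        NSError *error = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];

        // Set an explicit category rather than relying on the default.
        if (![session setCategory:AVAudioSessionCategoryPlayback error:&error]) {
            NSLog(@"Error setting audio session category: %@", error);
        }
        if (![session setActive:YES error:&error]) {
            NSLog(@"Error activating audio session: %@", error);
        }
    }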

Hopefully this might help someone else.

