AudioTimeStamp format + 'MusicDeviceMIDIEvent'
05-12-2019

Question
Can I get a little help with this?
In a test project, I have an AUSampler -> MixerUnit -> ioUnit chain with a render callback set up, and it all works. I am using the MusicDeviceMIDIEvent function, as defined in MusicDevice.h, to play a MIDI noteOn & noteOff. So in the hacky test code below, a noteOn occurs for 0.5 sec. every 2 seconds.
MusicDeviceMIDIEvent (below) takes a parameter, inOffsetSampleFrame, in order to schedule an event at a future time. What I would like to be able to do is play a noteOn and schedule the noteOff at the same time (without the hacky time check I am doing below). I just don't understand what the inOffsetSampleFrame value should be (e.g., to play a 0.5 sec or 0.2 sec note); in other words, I don't understand the basics of audio timing. So, if someone could walk me through the arithmetic to get proper values from the incoming AudioTimeStamp, that would be great! Also, perhaps correct me on / clarify any of these:
- AudioTimeStamp->mSampleTime: sampleTime is the time of the current sample "slice"? Is this in milliseconds?
- AudioTimeStamp->mHostTime: the "host" is the computer the app is running on, and this is the time (in milliseconds?) since the computer started? This is a HUGE number. Doesn't it roll over and then cause problems? (See the conversion sketch after this list.)
- inNumberFrames: seems like that is 512 on iOS 5 (set through kAudioUnitProperty_MaximumFramesPerSlice). So the slice is made up of 512 frames?
- I've seen lots of admonitions not to overload the render callback function, in particular to avoid Objective-C calls. I understand the reason, but how does one then message the UI or do other processing? (See the flag-polling sketch after my code below.)
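For concreteness, the conversions in question work out roughly like this. A sketch, assuming the 44.1 kHz graph sample rate used in the code below; neither timestamp is in milliseconds:

// Sketch: what the two timestamps measure, and converting each to seconds.
#include <mach/mach_time.h>
#include <AudioToolbox/AudioToolbox.h>

// mSampleTime counts sample frames since the graph started:
static Float64 SampleTimeToSeconds(Float64 sampleTime)
{
    return sampleTime / 44100.0;       // e.g. 22050 frames -> 0.5 s
}

// mHostTime counts mach "host ticks" since boot; convert via the timebase.
// It is a UInt64, so even at nanosecond resolution it would take centuries
// to roll over -- not a problem in practice.
static Float64 HostTimeToSeconds(UInt64 hostTime)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);     // ticks -> nanoseconds ratio
    Float64 nanos = (Float64)hostTime * timebase.numer / timebase.denom;
    return nanos / 1.0e9;
}

And inNumberFrames is simply the number of frames in the slice the callback is asked to render, so a 512-frame slice at 44.1 kHz covers about 11.6 ms.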
I guess that's it. Thanks for bearing with me!
From the MusicDevice.h documentation for inOffsetSampleFrame:

"If you are scheduling the MIDI Event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to schedule to the sample, the time when a MIDI command is applied and is particularly important when starting new notes. If you are not scheduling in the audio unit's render thread, then you should set this value to 0."
// MusicDeviceMIDIEvent function definition (from MusicDevice.h):
extern OSStatus
MusicDeviceMIDIEvent(MusicDeviceComponent  inUnit,
                     UInt32                inStatus,
                     UInt32                inData1,
                     UInt32                inData2,
                     UInt32                inOffsetSampleFrame);
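The duration-to-frames arithmetic itself is just seconds multiplied by the sample rate. A sketch, 44.1 kHz assumed:

// Sketch: duration -> sample frames, at a 44.1 kHz sample rate.
UInt32 framesForHalfSecond = (UInt32)(0.5 * 44100.0);  // 22050 frames
UInt32 framesFor200ms      = (UInt32)(0.2 * 44100.0);  //  8820 frames

// But with inNumberFrames = 512, one render slice covers only
// 512 / 44100 ~= 11.6 ms, so 22050 lies far outside the slice --
// inOffsetSampleFrame cannot reach that far (see the resolution below).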
// my render callback
OSStatus MyCallback(void                        *inRefCon,
                    AudioUnitRenderActionFlags  *ioActionFlags,
                    const AudioTimeStamp        *inTimeStamp,
                    UInt32                       inBusNumber,
                    UInt32                       inNumberFrames,
                    AudioBufferList             *ioData)
{
    Float64 sampleTime = inTimeStamp->mSampleTime;  // in sample frames
    UInt64  hostTime   = inTimeStamp->mHostTime;    // in mach host ticks

    // (This is the Objective-C call the admonitions warn about.)
    [(__bridge Audio *)inRefCon audioEvent:sampleTime andHostTime:hostTime];

    return noErr;  // was `return 1;` -- render callbacks should return noErr on success
}
// Objective-C method (the hack time check)
- (void)audioEvent:(Float64)sampleTime andHostTime:(UInt64)hostTime
{
    OSStatus result = noErr;
    Float64 nowTime = sampleTime / self.graphSampleRate;  // sample rate: 44100.0

    // Every 2 seconds: send a noteOn (kMIDIMessage_NoteOn << 4 = 0x90, channel 0).
    if (nowTime - lastTime > 2) {
        UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;
        result = MusicDeviceMIDIEvent(mySynthUnit, noteCommand, 60, 120, 0);
        lastTime = sampleTime / self.graphSampleRate;
    }
    // 0.5 sec after the noteOn: send the matching noteOff (0x80, channel 0).
    // (Note this re-sends the noteOff on every render slice until the next
    // noteOn -- harmless for a test, but part of why this is a hack.)
    if (nowTime - lastTime > 0.5) {
        UInt32 noteCommand = kMIDIMessage_NoteOff << 4 | 0;
        result = MusicDeviceMIDIEvent(mySynthUnit, noteCommand, 60, 0, 0);
    }
}
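On the side question of messaging the UI without Objective-C calls in the render thread: one common pattern is to store a lock-free flag from the callback and poll it from the main thread. A minimal sketch with hypothetical names, using C11 atomics:

// Sketch: signal the UI from the render thread without Objective-C calls.
#include <stdatomic.h>

static atomic_bool gNoteFired;   // zero-initialized to false

// Called from the render callback -- a plain atomic store is real-time safe:
static void SignalNoteFired(void) { atomic_store(&gNoteFired, true); }

// Called on the main thread, e.g. from an NSTimer firing every 50 ms:
- (void)pollAudioEvents
{
    if (atomic_exchange(&gNoteFired, false)) {
        // Safe to touch UIKit here: we are on the main thread.
        [self updateNoteDisplay];   // hypothetical UI method
    }
}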
Solution
The answer here is that I misunderstood the purpose of inOffsetSampleFrame, despite it being aptly named. I thought I could use it to schedule a noteOff event at some arbitrary time in the future, so I wouldn't have to manage noteOffs, but its scope is simply the current render slice: the offset can reach at most inNumberFrames samples ahead, not half a second out. Oh well.
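For anyone who lands here later, one workable alternative (a sketch, not a tested drop-in for this project): record the noteOff's target time in absolute sample frames when sending the noteOn, then fire it from the render callback in whichever slice contains that time, using the offset for sample accuracy within the slice:

// Sketch: a sample-accurate noteOff scheduled by absolute sample time.
// (gNoteOffSampleTime is hypothetical; mySynthUnit as in the code above.)
#include <AudioToolbox/AudioToolbox.h>

extern MusicDeviceComponent mySynthUnit;   // the AUSampler from the graph
static Float64 gNoteOffSampleTime = -1.0;  // set when sending the noteOn, e.g.:
                                           //   gNoteOffSampleTime = mSampleTime + 0.5 * 44100.0;

OSStatus MyScheduledCallback(void                        *inRefCon,
                             AudioUnitRenderActionFlags  *ioActionFlags,
                             const AudioTimeStamp        *inTimeStamp,
                             UInt32                       inBusNumber,
                             UInt32                       inNumberFrames,
                             AudioBufferList             *ioData)
{
    Float64 sliceStart = inTimeStamp->mSampleTime;

    // Fire the noteOff in the slice that contains its target sample time.
    if (gNoteOffSampleTime >= sliceStart &&
        gNoteOffSampleTime <  sliceStart + inNumberFrames) {
        UInt32 offset = (UInt32)(gNoteOffSampleTime - sliceStart);
        MusicDeviceMIDIEvent(mySynthUnit, 0x80, 60, 0, offset);  // noteOff, ch. 0
        gNoteOffSampleTime = -1.0;
    }
    return noErr;
}

Since this call happens on the render thread, the nonzero inOffsetSampleFrame is exactly the case the MusicDevice.h documentation quoted above describes.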