Internet Engineering Task Force (IETF)                       D. Burnett
Request for Comments: 6787                                        Voxeo
Category: Standards Track                                 S. Shanmugham
ISSN: 2070-1721                                     Cisco Systems, Inc.
                                                          November 2012
Media Resource Control Protocol Version 2 (MRCPv2)
Abstract
The Media Resource Control Protocol Version 2 (MRCPv2) allows client hosts to control media service resources such as speech synthesizers, recognizers, verifiers, and identifiers residing in servers on the network. MRCPv2 is not a "stand-alone" protocol -- it relies on other protocols, such as the Session Initiation Protocol (SIP), to coordinate MRCPv2 clients and servers and manage sessions between them, and the Session Description Protocol (SDP) to describe, discover, and exchange capabilities. It also depends on SIP and SDP to establish the media sessions and associated parameters between the media source or sink and the media server. Once this is done, the MRCPv2 exchange operates over the control session established above, allowing the client to control the media processing resources on the speech resource server.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6787.
Copyright Notice
Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.
MRCPv2 is designed to allow a client device to control media processing resources on the network. Some of these media processing resources include speech recognition engines, speech synthesis engines, speaker verification, and speaker identification engines. MRCPv2 enables the implementation of distributed Interactive Voice Response platforms using VoiceXML [W3C.REC-voicexml20-20040316] browsers or other client applications while maintaining separate back-end speech processing capabilities on specialized speech processing servers. MRCPv2 is based on the earlier Media Resource Control Protocol (MRCP) [RFC4463] developed jointly by Cisco Systems, Inc., Nuance Communications, and Speechworks, Inc. Although some of the method names are similar, the way in which these methods are communicated is different. There are also more resources and more methods for each resource. The first version of MRCP was essentially taken only as input to the development of this protocol. There is no expectation that an MRCPv2 client will work with an MRCPv1 server or vice versa. There is no migration plan or gateway definition between the two protocols.
The protocol requirements of Speech Services Control (SPEECHSC) [RFC4313] include that the solution be capable of reaching a media processing server, setting up communication channels to the media resources, and sending and receiving control messages and media streams to/from the server. The Session Initiation Protocol (SIP) [RFC3261] meets these requirements.
The proprietary version of MRCP ran over the Real Time Streaming Protocol (RTSP) [RFC2326]. At the time work on MRCPv2 was begun, the consensus was that this use of RTSP would break the RTSP protocol or cause backward-compatibility problems, something forbidden by Section 3.2 of [RFC4313]. This is the reason why MRCPv2 does not run over RTSP.
MRCPv2 leverages these capabilities by building upon SIP and the Session Description Protocol (SDP) [RFC4566]. MRCPv2 uses SIP to set up and tear down media and control sessions with the server. In addition, the client can use a SIP re-INVITE (an INVITE request sent within an existing SIP dialog) to change the characteristics of these media and control sessions while maintaining the SIP dialog between the client and server. SDP is used to describe the parameters of the media sessions associated with that dialog. It is mandatory to support SIP as the session establishment protocol to ensure interoperability. Other protocols can be used for session establishment by prior agreement. This document only describes the use of SIP and SDP.
MRCPv2 uses SIP and SDP to create the speech client/server dialog and set up the media channels to the server. It also uses SIP and SDP to establish MRCPv2 control sessions between the client and the server for each media processing resource required for that dialog. The MRCPv2 protocol exchange between the client and the media resource is carried on that control session. MRCPv2 exchanges do not change the state of the SIP dialog, the media sessions, or other parameters of the dialog initiated via SIP. It controls and affects the state of the media processing resource associated with the MRCPv2 session(s).
MRCPv2 defines the messages to control the different media processing resources and the state machines required to guide their operation. It also describes how these messages are carried over a transport-layer protocol such as the Transmission Control Protocol (TCP) [RFC0793] or the Transport Layer Security (TLS) Protocol [RFC5246]. (Note: the Stream Control Transmission Protocol (SCTP) [RFC4960] is a viable transport for MRCPv2 as well, but the mapping onto SCTP is not described in this specification.)
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
Since many of the definitions and syntax are identical to those for the Hypertext Transfer Protocol -- HTTP/1.1 [RFC2616], this specification refers to the section where they are defined rather than copying it. For brevity, [HX.Y] is to be taken to refer to Section X.Y of RFC 2616.
All the mechanisms specified in this document are described in both prose and an augmented Backus-Naur form (ABNF [RFC5234]).
The complete message format in ABNF form is provided in Section 15 and is the normative format definition. Note that productions may be duplicated within the main body of the document for reading convenience. If a production in the body of the text conflicts with one in the normative definition, the normative definition takes precedence.
Media Resource An entity on the speech processing server that can be controlled through MRCPv2.
MRCP Server Aggregate of one or more "Media Resource" entities on a server, exposed through MRCPv2. Often, 'server' in this document refers to an MRCP server.
MRCP Client An entity controlling one or more Media Resources through MRCPv2 ("Client" for short).
DTMF Dual-Tone Multi-Frequency; a method of transmitting key presses in-band, either as actual tones (Q.23 [Q.23]) or as named tone events (RFC 4733 [RFC4733]).
Endpointing The process of automatically detecting the beginning and end of speech in an audio stream. This is critical both for speech recognition and for automated recording as one would find in voice mail systems.
Hotword Mode A mode of speech recognition where a stream of utterances is evaluated for match against a small set of command words. This is generally employed either to trigger some action or to control the subsequent grammar to be used for further recognition.
The state-machine diagrams in this document do not show every possible method call. Rather, they reflect the state of the resource based on the methods that have moved to IN-PROGRESS or COMPLETE states (see Section 5.3). Note that since PENDING requests essentially have not affected the resource yet and are in the queue to be processed, they are not reflected in the state-machine diagrams.
This document defines many protocol headers that contain URIs (Uniform Resource Identifiers [RFC3986]) or lists of URIs for referencing media. The entire document, including the Security Considerations section (Section 12), assumes that HTTP or HTTP over TLS (HTTPS) [RFC2818] will be used as the URI addressing scheme unless otherwise stated. However, implementations MAY support other schemes (such as 'file'), provided they have addressed any security considerations described in this document and any others particular to the specific scheme. For example, implementations where the client and server both reside on the same physical hardware and the file system is secured by traditional user-level file access controls could be reasonable candidates for supporting the 'file' scheme.
A system using MRCPv2 consists of a client that requires the generation and/or consumption of media streams and a media resource server that has the resources or "engines" to process these streams as input or generate these streams as output. The client uses SIP and SDP to establish an MRCPv2 control channel with the server to use its media processing resources. MRCPv2 servers are addressed using SIP URIs.
SIP uses SDP with the offer/answer model described in RFC 3264 [RFC3264] to set up the MRCPv2 control channels and describe their characteristics. A separate MRCPv2 session is needed to control each of the media processing resources associated with the SIP dialog between the client and server. Within a SIP dialog, the individual resource control channels for the different resources are added or removed through SDP offer/answer carried in a SIP re-INVITE transaction.
The server, through the SDP exchange, provides the client with a difficult-to-guess, unambiguous channel identifier and a TCP port number (see Section 4.2). The client MAY then open a new TCP connection with the server on this port number. Multiple MRCPv2 channels can share a TCP connection between the client and the server. All MRCPv2 messages exchanged between the client and the server carry the specified channel identifier that the server MUST ensure is unambiguous among all MRCPv2 control channels that are active on that server. The client uses this channel identifier to indicate the media processing resource associated with that channel. For information on message framing, see Section 5.
SIP also establishes the media sessions between the client (or other source/sink of media) and the MRCPv2 server using SDP "m=" lines.
One or more media processing resources may share a media session under a SIP session, or each media processing resource may have its own media session.
The following diagram shows the general architecture of a system that uses MRCPv2. To simplify the diagram, only a few resources are shown.
An MRCPv2 server may offer one or more of the following media processing resources to its clients.
Basic Synthesizer A speech synthesizer resource that has very limited capabilities and can generate its media stream exclusively from concatenated audio clips. The speech data is described using a limited subset of the Speech Synthesis Markup Language (SSML) [W3C.REC-speech-synthesis-20040907] elements. A basic synthesizer MUST support the SSML tags <speak>, <audio>, <say-as>, and <mark>.
Speech Synthesizer A full-capability speech synthesis resource that can render speech from text. Such a synthesizer MUST have full SSML [W3C.REC-speech-synthesis-20040907] support.
Recorder A resource capable of recording audio and providing a URI pointer to the recording. A recorder MUST provide endpointing capabilities for suppressing silence at the beginning and end of a recording, and MAY also suppress silence in the middle of a recording. If such suppression is done, the recorder MUST maintain timing metadata to indicate the actual timestamps of the recorded media.
DTMF Recognizer A recognizer resource capable of extracting and interpreting Dual-Tone Multi-Frequency (DTMF) [Q.23] digits in a media stream and matching them against a supplied digit grammar. It could also do a semantic interpretation based on semantic tags in the grammar.
Speech Recognizer A full speech recognition resource that is capable of receiving a media stream containing audio and interpreting it into recognition results. It also has a natural language semantic interpreter to post-process the recognized data according to the semantic data in the grammar and provide semantic results along with the recognized input. The recognizer MAY also support enrolled grammars, where the client can enroll and create new personal grammars for use in future recognition operations.
Speaker Verifier A resource capable of verifying the authenticity of a claimed identity by matching a media stream containing spoken input to a pre-existing voiceprint. This may also involve matching the caller's voice against more than one voiceprint, also called multi-verification or speaker identification.
MRCPv2 requires a connection-oriented transport-layer protocol such as TCP to guarantee reliable sequencing and delivery of MRCPv2 control messages between the client and the server. In order to meet the requirements for security enumerated in SPEECHSC requirements [RFC4313], clients and servers MUST implement TLS as well. One or more connections between the client and the server can be shared among different MRCPv2 channels to the server. The individual messages carry the channel identifier to differentiate messages on different channels. MRCPv2 encoding is text based with mechanisms to carry embedded binary data. This allows arbitrary data like recognition grammars, recognition results, synthesizer speech markup, etc., to be carried in MRCPv2 messages. For information on message framing, see Section 5.
MRCPv2 employs SIP, in conjunction with SDP, as the session establishment and management protocol. The client reaches an MRCPv2 server using conventional INVITE and other SIP requests for establishing, maintaining, and terminating SIP dialogs. The SDP offer/answer exchange model over SIP is used to establish a resource control channel for each resource. The SDP offer/answer exchange is also used to establish media sessions between the server and the source or sink of audio.
The client needs a separate MRCPv2 resource control channel to control each media processing resource under the SIP dialog. A unique channel identifier string identifies these resource control channels. The channel identifier is a difficult-to-guess, unambiguous string followed by an "@", then by a string token specifying the type of resource. The server generates the channel identifier and MUST make sure it does not clash with the identifier of any other MRCP channel currently allocated by that server. MRCPv2 defines the following IANA-registered types of media processing resources. Additional resource types and their associated methods/events and state machines may be added as described below in Section 13.
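   +---------------+----------------------+
   | Resource Type | Resource Description |
   +---------------+----------------------+
   | speechrecog   | Speech Recognizer    |
   | dtmfrecog     | DTMF Recognizer      |
   | speechsynth   | Speech Synthesizer   |
   | basicsynth    | Basic Synthesizer    |
   | speakverify   | Speaker Verifier     |
   | recorder      | Speech Recorder      |
   +---------------+----------------------+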
The SIP INVITE or re-INVITE transaction and the SDP offer/answer exchange it carries contain "m=" lines describing the resource control channel to be allocated. There MUST be one SDP "m=" line for each MRCPv2 resource to be used in the session. This "m=" line MUST have a media type field of "application" and a transport type field of either "TCP/MRCPv2" or "TCP/TLS/MRCPv2". The port number field of the "m=" line MUST contain the "discard" port of the transport protocol (port 9 for TCP) in the SDP offer from the client and MUST contain the TCP listen port on the server in the SDP answer. The client may then either set up a TCP or TLS connection to that server port or share an already established connection to that port. Since MRCPv2 allows multiple sessions to share the same TCP connection, multiple "m=" lines in a single SDP document MAY share the same port field value; MRCPv2 servers MUST NOT assume any relationship between resources using the same port other than the sharing of the communication channel.
MRCPv2 resources do not use the port or format field of the "m=" line to distinguish themselves from other resources using the same channel. The client MUST specify the resource type identifier in the resource attribute associated with the control "m=" line of the SDP offer. The server MUST respond with the full Channel-Identifier (which includes the resource type identifier and a difficult-to-guess, unambiguous string) in the "channel" attribute associated with the control "m=" line of the SDP answer. To remain backwards compatible with conventional SDP usage, the format field of the "m=" line MUST have the arbitrarily selected value of "1".
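For example, the control "m=" line in an offer allocating a recognizer channel, and the corresponding answer, might look as follows (the port number and identifier value are illustrative):

   C->S (offer):
      m=application 9 TCP/MRCPv2 1
      a=setup:active
      a=connection:new
      a=resource:speechrecog

   S->C (answer):
      m=application 32416 TCP/MRCPv2 1
      a=setup:passive
      a=connection:new
      a=channel:32AECB23433801@speechrecog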
When the client wants to add a media processing resource to the session, it issues a new SDP offer, according to the procedures of RFC 3264 [RFC3264], in a SIP re-INVITE request. The SDP offer/answer exchange carried by this SIP transaction contains one or more additional control "m=" lines for the new resources to be allocated to the session. The server, on seeing the new "m=" line, allocates the resources (if they are available) and responds with a corresponding control "m=" line in the SDP answer carried in the SIP response. If the new resources are not available, the re-INVITE receives an error message, and existing media processing going on before the re-INVITE will continue as it was before. It is not possible to allocate more than one resource of each type. If a client requests more than one resource of any type, the server MUST behave as if the resources of that type (beyond the first one) are not available.
MRCPv2 clients and servers using TCP as a transport protocol MUST use the procedures specified in RFC 4145 [RFC4145] for setting up the TCP connection, with the considerations described below. Similarly, MRCPv2 clients and servers using TCP/TLS as a transport protocol MUST use the procedures specified in RFC 4572 [RFC4572] for setting up the TLS connection, with the considerations described below. The a=setup attribute, as described in RFC 4145 [RFC4145], MUST be "active" for the offer from the client and MUST be "passive" for the answer from the MRCPv2 server. The a=connection attribute MUST have a value of "new" on the very first control "m=" line offer from the client to an MRCPv2 server. Subsequent control "m=" line offers from the client to the MRCP server MAY contain "new" or "existing", depending on whether the client wants to set up a new connection or share an existing connection, respectively. If the client specifies a value of "new", the server MUST respond with a value of "new". If the client specifies a value of "existing", the server MUST respond with a value of either "existing", if it prefers to share an existing connection, or "new", in which case the client MUST initiate a new transport connection.
When the client wants to deallocate the resource from this session, it issues a new SDP offer, according to RFC 3264 [RFC3264], where the control "m=" line port MUST be set to 0. This SDP offer is sent in a SIP re-INVITE request. This deallocates the associated MRCPv2 identifier and resource. The server MUST NOT close the TCP or TLS connection if it is currently being shared among multiple MRCP channels. When all MRCP channels that may be sharing the connection are released and/or the associated SIP dialog is terminated, the client or server terminates the connection.
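For example, the offer releasing a previously allocated recognizer channel might carry (illustrative values):

   m=application 0 TCP/MRCPv2 1
   a=resource:speechrecog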
When the client wants to tear down the whole session and all its resources, it MUST issue a SIP BYE request to close the SIP session. This will deallocate all the control channels and resources allocated under the session.
All servers MUST support TLS. Servers MAY use TCP without TLS in controlled environments (e.g., not in the public Internet) where both nodes are inside a protected perimeter, for example, preventing access to the MRCP server from remote nodes outside the controlled perimeter. It is up to the client, through the SDP offer, to choose which transport it wants to use for an MRCPv2 session. Aside from the exceptions given above, when using TCP, the "m=" lines MUST conform to RFC 4145 [RFC4145], which describes the usage of SDP for connection-oriented transport. When using TLS, the SDP "m=" line for the control stream MUST conform to Connection-Oriented Media (COMEDIA) over TLS [RFC4572], which specifies the usage of SDP for establishing a secure connection-oriented transport over TLS.
This first example shows the power of using SIP to route to the appropriate resource. In the example, note the use of a request to a domain's speech server service in the INVITE to mresources@example.com. The SIP routing machinery in the domain locates the actual server, mresources@server.example.com, which gets returned in the 200 OK. Note that "cmid" is defined in Section 4.4.
This example exchange adds a resource control channel for a synthesizer. Since a synthesizer also generates an audio stream, this interaction also creates a receive-only Real-time Transport Protocol (RTP) [RFC3550] media session over which the server sends audio. The SIP dialog with the media source/sink is independent of MRCP and is not shown.
This example exchange continues from the previous figure and allocates an additional resource control channel for a recognizer. Since a recognizer would need to receive an audio stream for recognition, this interaction also updates the audio stream to sendrecv, making it a two-way RTP media session.
This example exchange continues from the previous figure and deallocates the recognizer channel. Since a recognizer no longer needs to receive an audio stream, this interaction also updates the RTP media session to recvonly.
Since MRCPv2 resources either generate or consume media streams, the client or the server needs to associate media sessions with their corresponding resource or resources. More than one resource could be associated with a single media session, or each resource could be assigned a separate media session. Also, note that more than one media session can be associated with a single resource if need be, but this scenario is not useful for the current set of resources. For example, a synthesizer and a recognizer could be associated with the same media session (m=audio line) if it is opened in "sendrecv" mode. Alternatively, the recognizer could have its own "sendonly" audio session, and the synthesizer could have its own "recvonly" audio session.
The association between control channels and their corresponding media sessions is established using a new "resource channel media identifier" media-level attribute ("cmid"). Valid values of this attribute are the values of the "mid" attribute defined in RFC 5888 [RFC5888]. If there is more than one audio "m=" line, then each audio "m=" line MUST have a "mid" attribute. Each control "m=" line MAY have one or more "cmid" attributes that match the resource control channel to the "mid" attributes of the audio "m=" lines with which it is associated. Note that if a control "m=" line does not have a "cmid" attribute, it will not be associated with any media, and operations on such a resource are hence limited. For example, for a recognizer resource, the RECOGNIZE method requires associated media to process, while the INTERPRET method does not. The formatting of the "cmid" attribute is described by the following ABNF:
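   cmid-attribute     = "a=cmid:" identification-tag
   identification-tag = token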
To allow this flexible mapping of media sessions to MRCPv2 control channels, a single audio "m=" line can be associated with multiple resources, or each resource can have its own audio "m=" line. For example, if the client wants to allocate a recognizer and a synthesizer and associate them with a single two-way audio stream, the SDP offer would contain two control "m=" lines and a single audio "m=" line with an attribute of "sendrecv". Each of the control "m=" lines would have a "cmid" attribute whose value matches the "mid" of the audio "m=" line. If, on the other hand, the client wants to allocate a recognizer and a synthesizer each with its own separate audio stream, the SDP offer would carry two control "m=" lines (one for the recognizer and another for the synthesizer) and two audio "m=" lines (one with the attribute "sendonly" and another with attribute "recvonly"). The "cmid" attribute of the recognizer control "m=" line would match the "mid" value of the "sendonly" audio "m=" line, and the "cmid" attribute of the synthesizer control "m=" line would match the "mid" attribute of the "recvonly" "m=" line.
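The first configuration, a single "sendrecv" audio stream shared by both resources, might look as follows (ports and identifier values are illustrative):

   m=audio 5324 RTP/AVP 0
   a=sendrecv
   a=mid:1

   m=application 32416 TCP/MRCPv2 1
   a=channel:32AECB23433801@speechrecog
   a=cmid:1

   m=application 32416 TCP/MRCPv2 1
   a=channel:32AECB23433801@speechsynth
   a=cmid:1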
When a server receives media (e.g., audio) on a media session that is associated with more than one media processing resource, it is the responsibility of the server to receive and fork the media to the resources that need to consume it. If multiple resources in an MRCPv2 session are generating audio (or other media) to be sent on a single associated media session, it is the responsibility of the server either to multiplex the multiple streams onto the single RTP session or to contain an embedded RTP mixer (see RFC 3550 [RFC3550]) to combine the multiple streams into one. In the former case, the media stream will contain RTP packets generated by different sources, and hence the packets will have different Synchronization Source Identifiers (SSRCs). In the latter case, the RTP packets will contain multiple Contributing Source Identifiers (CSRCs) corresponding to the original streams before being combined by the mixer. If an MRCPv2 server implementation neither multiplexes nor mixes, it MUST disallow the client from associating multiple such resources with a single audio stream by rejecting the SDP offer with a SIP 488 "Not Acceptable Here" error. Note that there is a large installed base that will return a SIP 501 "Not Implemented" error in this case. To facilitate interoperability with this installed base, new implementations SHOULD treat a 501 in this context as a 488 when it is received from an element known to be a legacy implementation.
The MRCPv2 messages defined in this document are transported over a TCP or TLS connection between the client and the server. The method for setting up this transport connection and the resource control channel is discussed in Sections 4.1 and 4.2. Multiple resource control channels between a client and a server that belong to different SIP dialogs can share one or more TLS or TCP connections between them; the server and client MUST support this mode of operation. Clients and servers MUST use the MRCPv2 channel identifier, carried in the Channel-Identifier header field in individual MRCPv2 messages, to differentiate MRCPv2 messages from different resource channels (see Section 6.2.1 for details). All MRCPv2 servers MUST support TLS. Servers MAY use TCP without TLS in controlled environments (e.g., not in the public Internet) where both nodes are inside a protected perimeter, for example, preventing access to the MRCP server from remote nodes outside the controlled perimeter. It is up to the client to choose which mode of transport it wants to use for an MRCPv2 session.
Most examples from here on show only the MRCPv2 messages and do not show the SIP messages that may have been used to establish the MRCPv2 control channel.
If an MRCP client notices that the underlying connection has been closed for one of its MRCP channels, and it has not previously initiated a re-INVITE to close that channel, it MUST send a BYE to close down the SIP dialog and all other MRCP channels. If an MRCP server notices that the underlying connection has been closed for one of its MRCP channels, and it has not previously received and accepted a re-INVITE closing that channel, then it MUST send a BYE to close down the SIP dialog and all other MRCP channels.
Except as otherwise indicated, MRCPv2 messages are Unicode, encoded in UTF-8 (RFC 3629 [RFC3629]), to allow many different languages to be represented. DEFINE-GRAMMAR (Section 9.8), for example, is one such exception, since its body can contain arbitrary XML in arbitrary (but specified via XML) encodings. MRCPv2 also allows message bodies to be represented in other character sets (for example, ISO 8859-1 [ISO.8859-1.1987]) because, in some locales, other character sets are already in widespread use. The MRCPv2 start-line (the first line of an MRCP message) and header field names use only the US-ASCII subset of UTF-8.
Lines are terminated by CRLF (carriage return, then line feed). Also, some parameters in the message may contain binary data or a record spanning multiple lines. Such fields have a length value associated with the parameter, which indicates the number of octets immediately following the parameter.
The MRCPv2 message set consists of requests from the client to the server, responses from the server to the client, and asynchronous events from the server to the client. All these messages consist of a start-line, one or more header fields, an empty line (i.e., a line with nothing preceding the CRLF) indicating the end of the header fields, and an optional message body.
The message-body contains resource-specific and message-specific data. The actual media types used to carry the data are specified in the sections defining the individual messages. Generic header fields are described in Section 6.2.
If a message contains a message body, the message MUST contain content-headers indicating the media type and encoding of the data in the message body.
Request, response, and event messages (described in the following sections) include the version of MRCP to which the message conforms. Version compatibility rules follow [H3.1] regarding version ordering, compliance requirements, and upgrading of version numbers. The version information is indicated by "MRCP" (as opposed to "HTTP" in [H3.1]) or "MRCP/2.0" (as opposed to "HTTP/1.1" in [H3.1]). To be compliant with this specification, clients and servers sending MRCPv2 messages MUST indicate an mrcp-version of "MRCP/2.0". ABNF productions using mrcp-version can be found in Sections 5.2, 5.3, and 5.5.
mrcp-version = "MRCP" "/" 1*2DIGIT "." 1*2DIGIT
The message-length field specifies the length of the message in octets, including the start-line, and MUST be the second token from the beginning of the message. This makes the framing and parsing of the message simpler. This field specifies the length of the message including any data that may be encoded into the body of the message. Note that this value MAY be given as a fixed-length integer that is zero-padded (with leading zeros) in order to eliminate or reduce inefficiency in cases where the message-length value would change as a result of the length of the message-length token itself. This value, as with all lengths in MRCP, is to be interpreted as a base-10 number. In particular, leading zeros do not indicate that the value is to be interpreted as a base-8 number.
message-length = 1*19DIGIT
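The following Python sketch (illustrative only and not part of this specification; the function name and buffering strategy are assumptions of the sketch) shows how a receiver might use the message-length token to frame messages arriving on a connection:

   def read_mrcp_message(sock, buf=b""):
       # Read until the start-line, terminated by CRLF, is available.
       while b"\r\n" not in buf:
           chunk = sock.recv(4096)
           if not chunk:
               raise ConnectionError("peer closed the connection")
           buf += chunk
       # The message-length is the second token of the start-line;
       # e.g., "MRCP/2.0 877 INTERPRET 543266" declares 877 octets.
       start_line = buf.split(b"\r\n", 1)[0].decode("ascii")
       total_len = int(start_line.split()[1])
       # Keep reading until the complete message has arrived.
       while len(buf) < total_len:
           chunk = sock.recv(4096)
           if not chunk:
               raise ConnectionError("peer closed the connection")
           buf += chunk
       # Return the framed message plus any leftover octets, which
       # begin the next message on the shared connection.
       return buf[:total_len], buf[total_len:]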
The following sample MRCP exchange demonstrates proper message-length values. The values for message-length have been removed from all other examples in the specification and replaced by '...' to reduce confusion in the case of minor message-length computation errors in those examples.
C->S: MRCP/2.0 877 INTERPRET 543266
      Channel-Identifier:32AECB23433801@speechrecog
      Interpret-Text:may I speak to Andre Roy
      Content-Type:application/srgs+xml
      Content-ID:<request1@form-level.store>
      Content-Length:661
      <?xml version="1.0"?>
      <!-- the default grammar language is US English -->
      <grammar xmlns="http://www.w3.org/2001/06/grammar"
               xml:lang="en-US" version="1.0" root="request">
      <!-- single language attachment to tokens -->
         <rule id="yes">
            <one-of>
               <item xml:lang="fr-CA">oui</item>
               <item xml:lang="en-US">yes</item>
            </one-of>
         </rule>
      <!-- single language attachment to a rule expansion -->
         <rule id="request">
            may I speak to
            <one-of xml:lang="fr-CA">
               <item>Michel Tremblay</item>
               <item>Andre Roy</item>
            </one-of>
         </rule>
      </grammar>
      <?xml version="1.0"?>
      <result xmlns="urn:ietf:params:xml:ns:mrcpv2"
              xmlns:ex="http://www.example.com/example"
              grammar="session:request1@form-level.store">
         <interpretation>
            <instance name="Person">
               <ex:Person>
                  <ex:Name> Andre Roy </ex:Name>
               </ex:Person>
            </instance>
            <input> may I speak to Andre Roy </input>
         </interpretation>
      </result>
All MRCPv2 requests, responses, and events MUST carry the Channel-Identifier header field so that the server or client can differentiate messages from different control channels that may share the same transport connection.
In the resource-specific header field descriptions in Sections 8-11, a header field is disallowed on a method (request, response, or event) for that resource unless specifically listed as being allowed. Also, the phrasing "This header field MAY occur on method X" indicates that the header field is allowed on that method but is not required to be used in every instance of that method.
An MRCPv2 request consists of a Request line followed by the message header section and an optional message body containing data specific to the request message.
The Request message from a client to the server includes within the first line the method to be applied, a request-id for that request, and the version of the protocol in use.
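In ABNF (see also Section 15), the request start-line has the following form:

   request-line = mrcp-version SP message-length SP method-name
                  SP request-id CRLF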
The mrcp-version field is the MRCP protocol version that is being used by the client.
The message-length field specifies the length of the message, including the start-line.
Details about the mrcp-version and message-length fields are given in Section 5.1.
The method-name field identifies the specific request that the client is making to the server. Each resource supports a subset of the MRCPv2 methods. The subset for each resource is defined in the section of the specification for the corresponding resource.
The request-id field is a unique identifier, representable as an unsigned 32-bit integer, that is created by the client and sent to the server. Clients MUST use monotonically increasing request-ids for consecutive requests within an MRCP session. The request-id space is linear (i.e., not modulo 2^32), so the space does not wrap, and validity can be checked with a simple unsigned comparison operation. The client may choose any initial value for its first request, but a small integer is RECOMMENDED to avoid exhausting the space in long sessions. If the server receives duplicate or out-of-order requests, the server MUST reject the request with a response code of 410. Since request-ids are scoped to the MRCP session, they are unique across all TCP connections and all resource channels in the session.
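As an illustration only (the function name is an assumption of this sketch), a server might validate incoming request-ids as follows:

   def check_request_id(last_request_id, new_request_id):
       # The request-id space is linear and does not wrap, so a single
       # unsigned comparison detects duplicate or out-of-order requests.
       if last_request_id is not None and new_request_id <= last_request_id:
           return 410  # Non-monotonic or out-of-order sequence number
       return None     # acceptable; becomes the session's new high mark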
The server resource MUST use the client-assigned identifier in its response to the request. If the request does not complete synchronously, future asynchronous events associated with this request MUST carry the client-assigned request-id.
After receiving and interpreting the request message for a method, the server resource responds with an MRCPv2 response message. The response consists of a response line followed by the message header section and an optional message body containing data specific to the method.
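In ABNF (see also Section 15), the response start-line has the following form:

   response-line = mrcp-version SP message-length SP request-id
                   SP status-code SP request-state CRLF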
The mrcp-version field MUST contain the version of the request if supported; otherwise, it MUST contain the highest version of MRCP supported by the server.
The message-length field specifies the length of the message, including the start-line.
Details about the mrcp-version and message-length fields are given in Section 5.1.
The request-id used in the response MUST match the one sent in the corresponding request message.
The status-code field is a 3-digit code representing the success or failure or other status of the request.
status-code = 3DIGIT
The request-state field indicates whether the action initiated by the Request is PENDING, IN-PROGRESS, or COMPLETE. The COMPLETE status means that the request was processed to completion and that there will be no more events or other messages from that resource to the client with that request-id. The PENDING status means that the request has been placed in a queue and will be processed in first-in-first-out order. The IN-PROGRESS status means that the request is being processed and is not yet complete. A PENDING or IN-PROGRESS status indicates that further Event messages may be delivered with that request-id.
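For example, a response reporting that a request has been accepted and is being processed might look as follows (the message-length value is elided as described in Section 5.1, and the identifier values are illustrative):

   S->C: MRCP/2.0 ... 543266 200 IN-PROGRESS
         Channel-Identifier:32AECB23433801@speechrecog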
The status codes are classified under the Success (2xx), Client Failure (4xx), and Server Failure (5xx) codes.
   +------------+--------------------------------------------------+
   | Code       | Meaning                                          |
   +------------+--------------------------------------------------+
   | 200        | Success                                          |
   | 201        | Success with some optional header fields ignored |
   +------------+--------------------------------------------------+
Success (2xx)
   +--------+----------------------------------------------------------+
   | Code   | Meaning                                                  |
   +--------+----------------------------------------------------------+
   | 401    | Method not allowed                                       |
   | 402    | Method not valid in this state                           |
   | 403    | Unsupported header field                                 |
   | 404    | Illegal value for header field.  This is the error for   |
   |        | a syntax violation.                                      |
   | 405    | Resource not allocated for this session or does not     |
   |        | exist                                                    |
   | 406    | Mandatory Header Field Missing                           |
   | 407    | Method or Operation Failed (e.g., Grammar compilation    |
   |        | failed in the recognizer.  Detailed cause codes might    |
   |        | be available through a resource-specific header.)        |
   | 408    | Unrecognized or unsupported message entity               |
   | 409    | Unsupported Header Field Value.  This is a value that    |
   |        | is syntactically legal but exceeds the implementation's  |
   |        | capabilities or expectations.                            |
   | 410    | Non-Monotonic or Out-of-order sequence number in         |
   |        | request.                                                 |
   | 411-420| Reserved for future assignment                           |
   +--------+----------------------------------------------------------+
Client Failure (4xx)
   +------------+--------------------------------+
   | Code       | Meaning                        |
   +------------+--------------------------------+
   | 501        | Server Internal Error          |
   | 502        | Protocol Version not supported |
   | 503        | Reserved for future assignment |
   | 504        | Message too large              |
   +------------+--------------------------------+
The server resource may need to communicate a change in state or the occurrence of a certain event to the client. These messages are used when a request does not complete immediately and the response returns a status of PENDING or IN-PROGRESS. The intermediate results and events of the request are indicated to the client through the event message from the server. The event message consists of an event header line followed by the message header section and an optional message body containing data specific to the event message. The header line contains the request-id of the corresponding request and the request-state value. The request-state value is COMPLETE if the request is done and this was the last event; otherwise, it is IN-PROGRESS.
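In ABNF (see also Section 15), the event start-line has the following form:

   event-line = mrcp-version SP message-length SP event-name
                SP request-id SP request-state CRLF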
The mrcp-version used here is identical to the one used in the Request/Response line and indicates the highest version of MRCP running on the server.
The message-length field specifies the length of the message, including the start-line.
Details about the mrcp-version and message-length fields are given in Section 5.1.
The event-name identifies the nature of the event generated by the media resource. The set of valid event names depends on the resource generating it. See the corresponding resource-specific section of the document.
The request-id used in the event MUST match the one sent in the request that caused this event.
The request-state indicates whether the Request/Command causing this event is complete or still in progress; its meaning is the same as that described in Section 5.3. The final event for a request has a COMPLETE status indicating the completion of the request.
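For example, a final event for a completed synthesizer request might look as follows (identifier and cause values are illustrative):

   S->C: MRCP/2.0 ... SPEAK-COMPLETE 543257 COMPLETE
         Channel-Identifier:32AECB23433801@speechsynth
         Completion-Cause:000 normal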
6. MRCPv2 Generic Methods, Headers, and Result Structure
MRCPv2 supports a set of methods and header fields that are common to all resources. These are discussed here; resource-specific methods and header fields are discussed in the corresponding resource-specific section of the document.
The SET-PARAMS method, from the client to the server, tells the MRCPv2 resource to define parameters for the session, such as voice characteristics and prosody on synthesizers, recognition timers on recognizers, etc. If the server accepts and sets all parameters, it MUST return a response status-code of 200. If it chooses to ignore some optional header fields that can be safely ignored without affecting operation of the server, it MUST return 201.
If one or more of the header fields being sent is incorrect, error 403, 404, or 409 MUST be returned as follows:
o If one or more of the header fields being set has an illegal value, the server MUST reject the request with a 404 Illegal Value for Header Field.
o If one or more of the header fields being set is unsupported for the resource, the server MUST reject the request with a 403 Unsupported Header Field, except as described in the next paragraph.
o If one or more of the header fields being set has an unsupported value, the server MUST reject the request with a 409 Unsupported Header Field Value, except as described in the next paragraph.
If both error 404 and another error have occurred, only error 404 MUST be returned. If both errors 403 and 409 have occurred, but not error 404, only error 403 MUST be returned.
If error 403, 404, or 409 is returned, the response MUST include the bad or unsupported header fields and their values exactly as they were sent from the client. Session parameters modified using SET-PARAMS do not override parameters explicitly specified on individual requests or requests that are IN-PROGRESS.
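For example, a client might set two synthesizer parameters for the session as follows (the header fields and values are illustrative):

   C->S: MRCP/2.0 ... SET-PARAMS 543256
         Channel-Identifier:32AECB23433801@speechsynth
         Voice-gender:female
         Voice-variant:3

   S->C: MRCP/2.0 ... 543256 200 COMPLETE
         Channel-Identifier:32AECB23433801@speechsynth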
The GET-PARAMS method, from the client to the server, asks the MRCPv2 resource for its current session parameters, such as voice characteristics and prosody on synthesizers, recognition timers on recognizers, etc. For every header field the client sends in the request without a value, the server MUST include the header field and its corresponding value in the response. If no parameter header fields are specified by the client, then the server MUST return all the settable parameters and their values in the corresponding header section of the response, including vendor-specific parameters. Such wildcard parameter requests can be very processing-intensive, since the number of settable parameters can be large depending on the implementation. Hence, it is RECOMMENDED that the client not use the wildcard GET-PARAMS operation very often. Note that GET-PARAMS returns header field values that apply to the whole session and not values that have a request-level scope. For example, Input-Waveform-URI is a request-level header field and thus would not be returned by GET-PARAMS.
If all of the header fields requested are supported, the server MUST return a response status-code of 200. If some of the header fields being retrieved are unsupported for the resource, the server MUST reject the request with a 403 Unsupported Header Field. Such a response MUST include the unsupported header fields exactly as they were sent from the client, without values.
All MRCPv2 header fields, which include both the generic-headers defined in the following subsections and the resource-specific header fields defined later, follow the same generic format as that given in Section 3.1 of RFC 5322 [RFC5322]. Each header field consists of a name followed by a colon (":") and the value. Header field names are case-insensitive. The value MAY be preceded by any amount of LWS (linear white space), though a single SP (space) is preferred. Header fields may extend over multiple lines by preceding each extra line with at least one SP or HT (horizontal tab).
   generic-field  = field-name ":" [ field-value ]
   field-name     = token
   field-value    = *LWS field-content *( CRLF 1*LWS field-content)
   field-content  = <the OCTETs making up the field-value and
                     consisting of either *TEXT or combinations
                     of token, separators, and quoted-string>
The field-content does not include any leading or trailing LWS (i.e., linear white space occurring before the first non-whitespace character of the field-value or after the last non-whitespace character of the field-value). Such leading or trailing LWS MAY be removed without changing the semantics of the field value. Any LWS that occurs between field-content MAY be replaced with a single SP before interpreting the field value or forwarding the message downstream.
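The following Python sketch (illustrative only; names are assumptions of the sketch) shows one way to unfold and parse a header section according to these rules:

   import re

   def parse_header_section(header_block):
       # Unfold multi-line fields: a line beginning with SP or HT
       # continues the previous header field's value.
       unfolded = re.sub(r"\r\n[ \t]+", " ", header_block)
       fields = []
       for line in unfolded.split("\r\n"):
           if not line:
               continue
           name, _, value = line.partition(":")
           # Header field names are case-insensitive; leading and
           # trailing LWS around the value carries no meaning.
           fields.append((name.strip().lower(), value.strip()))
       return fields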
MRCPv2 servers and clients MUST NOT depend on header field order. It is RECOMMENDED to send general-header fields first, followed by request-header or response-header fields, and ending with the entity- header fields. However, MRCPv2 servers and clients MUST be prepared to process the header fields in any order. The only exception to this rule is when there are multiple header fields with the same name in a message.
Multiple header fields with the same name MAY be present in a message if and only if the entire value for that header field is defined as a comma-separated list [i.e., #(values)].
It MUST be possible to combine multiple header fields of the same name into one "name:value" pair without changing the semantics of the message, by appending each subsequent value to the first, each separated by a comma. The order in which header fields with the same name are received is therefore significant to the interpretation of the combined header field value (vendor-specific parameters, for example, may be order-dependent), and thus an intermediary MUST NOT change the order of these values when a message is forwarded.
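Continuing the sketch above (illustrative only), repeated fields can be combined without changing the semantics of the message:

   def combine_repeated_fields(fields):
       # Fields repeated by name (legal only when the value is defined
       # as a comma-separated list) are combined by appending each
       # subsequent value to the first, preserving arrival order.
       combined = {}
       for name, value in fields:
           combined[name] = (combined[name] + "," + value
                             if name in combined else value)
       return combined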
All MRCPv2 requests, responses, and events MUST contain the Channel-Identifier header field. The value is allocated by the server when a control channel is added to the session and communicated to the client by the "a=channel" attribute in the SDP answer from the server. The header field value consists of 2 parts separated by the '@' symbol. The first part is an unambiguous string identifying the MRCPv2 session. The second part is a string token that specifies one of the media processing resource types listed in Section 3.1. The unambiguous string (first part) MUST be difficult to guess, unique among the resource instances managed by the server, and common to all resource channels with that server established through a single SIP dialog.
The Accept header field follows the syntax defined in [H14.1]. The semantics are also identical, with the exception that if no Accept header field is present, the server MUST assume a default value that is specific to the resource type that is being controlled. This default value can be changed for a resource on a session by sending this header field in a SET-PARAMS method. The current default value of this header field for a resource in a session can be found through a GET-PARAMS method. This header field MAY occur on any request.
In a request, this header field indicates the list of request-ids to which the request applies. This is useful when there are multiple requests that are PENDING or IN-PROGRESS and the client wants this request to apply to one or more of these specifically.
In a response, this header field returns the list of request-ids that the method modified or affected. There could be one or more requests in a request-state of PENDING or IN-PROGRESS. When a method affecting one or more PENDING or IN-PROGRESS requests is sent from the client to the server, the response MUST contain the list of request-ids that were affected or modified by this command in its header section.
The Active-Request-Id-List is only used in requests and responses, not in events.
For example, if a STOP request with no Active-Request-Id-List is sent to a synthesizer resource that has one or more SPEAK requests in the PENDING or IN-PROGRESS state, all SPEAK requests MUST be cancelled, including the one IN-PROGRESS. The response to the STOP request contains in the Active-Request-Id-List value the request-ids of all the SPEAK requests that were terminated. After sending the STOP response, the server MUST NOT send any SPEAK-COMPLETE or RECOGNITION-COMPLETE events for the terminated requests.
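Such an exchange might look as follows (identifier values are illustrative; here the STOP request cancels two outstanding SPEAK requests):

   C->S: MRCP/2.0 ... STOP 543261
         Channel-Identifier:32AECB23433801@speechsynth

   S->C: MRCP/2.0 ... 543261 200 COMPLETE
         Channel-Identifier:32AECB23433801@speechsynth
         Active-Request-Id-List:543258,543259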
When any server resource generates a "barge-in-able" event, it also generates a unique tag. The tag is sent as this header field's value in an event to the client. The client then acts as an intermediary among the server resources and sends a BARGE-IN-OCCURRED method to the synthesizer server resource with the Proxy-Sync-Id it received from the server resource. When the recognizer and synthesizer resources are part of the same session, they may choose to work together to achieve quicker interaction and response. Here, the Proxy-Sync-Id helps the resource receiving the event, intermediated by the client, to decide if this event has been processed through a direct interaction of the resources. This header field MAY occur only on events and the BARGE-IN-OCCURRED method. The name of this header field contains the word 'proxy' only for historical reasons and does not imply that a proxy server is involved.
See [H14.2]. This specifies the acceptable character sets for entities returned in the response or events associated with this request. This is useful in specifying the character set to use in the Natural Language Semantic Markup Language (NLSML) results of a RECOGNITION-COMPLETE event. This header field is only used on requests.
See [H14.17]. MRCPv2 supports a restricted set of registered media types for content, including speech markup, grammar, and recognition results. The content types applicable to each MRCPv2 resource type are specified in the corresponding section of the document and are registered in the MIME Media Types registry maintained by IANA. The multipart content type "multipart/mixed" is supported so that multiple of the above content types can be communicated in a single message, in which case the body parts MUST NOT contain any MRCPv2-specific header fields. This header field MAY occur on all messages.
This header field contains an ID or name for the content by which it can be referenced. This header field operates according to the specification in RFC 2392 [RFC2392] and is required for content disambiguation in multipart messages. In MRCPv2, whenever the associated content is stored by either the client or the server, it MUST be retrievable using this ID. Such content can be referenced later in a session by addressing it with the 'session' URI scheme described in Section 13.6. This header field MAY occur on all messages.
The Content-Base entity-header MAY be used to specify the base URI for resolving relative URIs within the entity. Note, however, that the base URI of the contents within the entity-body may be redefined within that entity-body. An example of this would be multipart media, which in turn can have multiple entities within it. This header field MAY occur on all messages.
The Content-Encoding entity-header is used as a modifier to the Content-Type. When present, its value indicates what additional content encoding has been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the Media Type referenced by the Content-Type header field. Content-Encoding is primarily used to allow a document to be compressed without losing the identity of its underlying media type. Note that the SIP session can be used to determine accepted encodings (see Section 7). This header field MAY occur on all messages.
The Content-Location entity-header MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource's URI. Refer to [H14.14].
The Content-Location value is a statement of the location of the resource corresponding to this particular entity at the time of the request. This header field is provided for optimization purposes only. The receiver of this header field MAY assume that the entity being sent is identical to what would have been retrieved or might already have been retrieved from the Content-Location URI.
For example, if the client provided a grammar markup inline, and it had previously retrieved it from a certain URI, that URI can be provided as part of the entity, using the Content-Location header field. This allows a resource like the recognizer to look into its cache to see if this grammar was previously retrieved, compiled, and cached. In this case, it might optimize by using the previously compiled grammar object.
If the Content-Location is a relative URI, the relative URI is interpreted relative to the Content-Base URI. This header field MAY occur on all messages.
This header field contains the length of the content of the message body (i.e., after the double CRLF following the last header field). Unlike in HTTP, it MUST be included in all messages that carry content beyond the header section. If it is missing, a default value of zero is assumed. Otherwise, it is interpreted according to [H14.13]. When a message having no use for a message body contains one, i.e., the Content-Length is non-zero, the receiver MUST ignore the content of the message body. This header field MAY occur on all messages.
When the recognizer or synthesizer needs to fetch documents or other resources, this header field controls the corresponding URI access properties. It defines the timeout for content that the server may need to fetch over the network. The value is interpreted to be in milliseconds and ranges from 0 to an implementation-specific maximum value. It is RECOMMENDED that servers be cautious about accepting long timeout values. The default value for this header field is implementation-specific. This header field MAY occur in DEFINE-GRAMMAR, RECOGNIZE, SPEAK, SET-PARAMS, or GET-PARAMS.
If the server implements content caching, it MUST adhere to the cache correctness rules of HTTP 1.1 [RFC2616] when accessing and caching stored content. In particular, the "expires" and "cache-control" header fields of the cached URI or document MUST be honored and take precedence over the Cache-Control defaults set by this header field. The Cache-Control directives are used to define the default caching algorithms on the server for the session or request. The scope of the directive is based on the method it is sent on. If the directive is sent on a SET-PARAMS method, it applies for all requests for external documents the server makes during that session, unless it is overridden by a Cache-Control header field on an individual request. If the directives are sent on any other requests, they apply only to external document requests the server makes for that request. An empty Cache-Control header field on the GET-PARAMS method is a request for the server to return the current Cache-Control directives setting on the server. This header field MAY occur only on requests.
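The directives are expressed as follows:

   cache-directive = "max-age" "=" delta-seconds
                   / "max-stale" [ "=" delta-seconds ]
                   / "min-fresh" "=" delta-seconds
   delta-seconds   = 1*19DIGIT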
Here, delta-seconds is a decimal time value specifying the number of seconds since the instant the message response or data was received by the server.
The different cache-directive options allow the client to ask the server to override the default cache expiration mechanisms:
max-age Indicates that the client can tolerate the server using content whose age is no greater than the specified time in seconds. Unless a "max-stale" directive is also included, the client is not willing to accept a response based on stale data.
min-fresh Indicates that the client is willing to accept a server response with cached data whose expiration is no less than its current age plus the specified time in seconds. If the server's cache time-to-live exceeds the client-supplied min-fresh value, the server MUST NOT utilize cached content.
max-stale Indicates that the client is willing to allow a server to utilize cached data that has exceeded its expiration time. If "max-stale" is assigned a value, then the client is willing to allow the server to use cached data that has exceeded its expiration time by no more than the specified number of seconds. If no value is assigned to "max-stale", then the client is willing to allow the server to use stale data of any age.
If the server cache is requested to use stale response/data without validation, it MAY do so only if this does not conflict with any "MUST"-level requirements concerning cache validation (e.g., a "must-revalidate" Cache-Control directive in the HTTP 1.1 specification pertaining to the corresponding URI).
If both the MRCPv2 Cache-Control directive and the cached entry on the server include "max-age" directives, then the lesser of the two values is used for determining the freshness of the cached entry for that request.
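As an informative sketch, a client might set a session-wide caching default with SET-PARAMS (identifiers and the directive value are illustrative; message lengths are elided):

   C->S: MRCP/2.0 ... SET-PARAMS 543258
         Channel-Identifier:32AECB23433802@speechrecog
         Cache-Control:max-age=86400

   S->C: MRCP/2.0 ... 543258 200 COMPLETE
         Channel-Identifier:32AECB23433802@speechrecog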
This header field MAY be sent as part of a SET-PARAMS/GET-PARAMS method to set or retrieve the logging tag for logs generated by the server. Once set, the value persists until a new value is set or the session ends. The MRCPv2 server MAY provide a mechanism to create subsets of its output logs so that system administrators can examine or extract only the log file portion during which the logging tag was set to a certain value.
It is RECOMMENDED that clients include in the logging tag information to identify the MRCPv2 client User Agent, so that one can determine which MRCPv2 client request generated a given log message at the server. It is also RECOMMENDED that MRCPv2 clients not log
personally identifiable information such as credit card numbers and national identification numbers.
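For example, a client might tag server logs for the duration of a call as follows (the tag format is entirely client defined; the value shown is illustrative):

   C->S: MRCP/2.0 ... SET-PARAMS 543259
         Channel-Identifier:32AECB23433802@speechsynth
         Logging-Tag:AppB-ClientUA17-Call93848

   S->C: MRCP/2.0 ... 543259 200 COMPLETE
         Channel-Identifier:32AECB23433802@speechsynth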
Since the associated HTTP client on an MRCPv2 server fetches documents for processing on behalf of the MRCPv2 client, the cookie store in the HTTP client of the MRCPv2 server is treated as an extension of the cookie store in the HTTP client of the MRCPv2 client. This requires that the MRCPv2 client and server be able to synchronize their common cookie store as needed. To enable the MRCPv2 client to push its stored cookies to the MRCPv2 server and get new cookies from the MRCPv2 server stored back to the MRCPv2 client, the Set-Cookie entity-header field MAY be included in MRCPv2 requests to update the cookie store on a server and be returned in final MRCPv2 responses or events to subsequently update the client's own cookie store. The stored cookies on the server persist for the duration of the MRCPv2 session and MUST be destroyed at the end of the session. To ensure support for cookies, MRCPv2 clients and servers MUST support the Set-Cookie entity-header field.
Note that it is the MRCPv2 client that determines which, if any, cookies are sent to the server. There is no requirement that all cookies be shared. Rather, it is RECOMMENDED that MRCPv2 clients communicate only cookies needed by the MRCPv2 server to process its requests.
set-cookie  =  "Set-Cookie:" cookies CRLF
cookies     =  cookie *("," *LWS cookie)
cookie      =  attribute "=" value *(";" cookie-av)
cookie-av   =  "Comment" "=" value
            /  "Domain" "=" value
            /  "Max-Age" "=" value
            /  "Path" "=" value
            /  "Secure"
            /  "Version" "=" 1*19DIGIT
            /  "Age" "=" delta-seconds
The Set-Cookie header field is specified in RFC 6265 [RFC6265]. The "Age" attribute is introduced in this specification to indicate the age of the cookie and is OPTIONAL. An MRCPv2 client or server MUST calculate the age of the cookie according to the age calculation rules in the HTTP/1.1 specification [RFC2616] and append the "Age" attribute accordingly. This attribute is provided because time may have passed since the client received the cookie from an HTTP server. Rather than having the client reduce Max-Age by the actual age, it passes Max-Age verbatim and appends the "Age" attribute, thus maintaining the cookie as received while still accounting for the fact that time has passed.
The MRCPv2 client or server MUST supply defaults for the "Domain" and "Path" attributes, as specified in RFC 6265, if they are omitted by the HTTP origin server. Note that there is no leading dot present in the "Domain" attribute value in this case. Although an explicitly specified "Domain" value received via the HTTP protocol may be modified to include a leading dot, an MRCPv2 client or server MUST NOT modify the "Domain" value when received via the MRCPv2 protocol.
An MRCPv2 client or server MAY combine multiple cookie header fields of the same type into a single "field-name:field-value" pair as described in Section 6.2.
The Set-Cookie header field MAY be specified in any request that subsequently results in the server performing an HTTP access. When a server receives new cookie information from an HTTP origin server, and assuming the cookie store is modified according to RFC 6265, the server MUST return the new cookie information in the MRCPv2 COMPLETE response or event, as appropriate, to allow the client to update its own cookie store.
The SET-PARAMS request MAY specify the Set-Cookie header field to update the cookie store on a server. The GET-PARAMS request MAY be used to return the entire cookie store of "Set-Cookie" type cookies to the client.
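As an informative sketch, a client might push a cookie to the server with a RECOGNIZE request, and the server might return updated cookie information in the RECOGNITION-COMPLETE event (cookie names, values, URIs, and identifiers are illustrative; message lengths and the NLSML body are elided):

   C->S: MRCP/2.0 ... RECOGNIZE 543260
         Channel-Identifier:32AECB23433802@speechrecog
         Set-Cookie:session=1234;Domain=example.com;Path=/;Max-Age=3600
         Content-Type:text/uri-list
         Content-Length:...

         http://www.example.com/grammars/menu.grxml

   S->C: MRCP/2.0 ... RECOGNITION-COMPLETE 543260 COMPLETE
         Channel-Identifier:32AECB23433802@speechrecog
         Completion-Cause:000 success
         Set-Cookie:session=5678;Domain=example.com;Path=/;Max-Age=3600
         Content-Type:application/nlsml+xml
         Content-Length:...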
vendor-specific-av-pair = vendor-av-pair-name "=" value
vendor-av-pair-name = 1*UTFCHAR
Header fields of this form MAY be sent in any method (request) and are used to manage implementation-specific parameters on the server side. The vendor-av-pair-name follows the reverse Internet Domain Name convention (see Section 13.1.6 for syntax and registration information). The value of the vendor attribute is specified after the "=" symbol and MAY be quoted. For example:
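   Vendor-Specific-Parameters:com.example.companyA.paramxyz=256

(The parameter name above is purely illustrative; actual names are defined by each vendor under its own reversed domain name.)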
When used in GET-PARAMS to get the current value of these parameters from the server, this header field value MAY contain a semicolon-separated list of implementation-specific attribute names.
Result data from the server for the Recognizer and Verifier resources is carried as a typed media entity in the MRCPv2 message body of various events. The Natural Language Semantics Markup Language (NLSML), an XML markup based on an early draft from the W3C, is the default standard for returning results back to the client. Hence, all servers implementing these resource types MUST support the media type 'application/nlsml+xml'. The Extensible MultiModal Annotation (EMMA) [W3C.REC-emma-20090210] format can be used to return results as well. This can be done by negotiating the format at session establishment time with SDP (a=resultformat:application/emma+xml) or with SIP (Allow/Accept). With SIP, for example, if a client wants
results in EMMA, an MRCPv2 server can route the request to another server that supports EMMA by inspecting the SIP header fields, rather than having to inspect the SDP.
MRCPv2 uses this representation to convey content among the clients and servers that generate and make use of the markup. MRCPv2 uses NLSML specifically to convey recognition, enrollment, and verification results between the corresponding resource on the MRCPv2 server and the MRCPv2 client. Details of this result format are fully described in Section 6.3.1.
The Natural Language Semantics Markup Language (NLSML) is an XML data structure with elements and attributes designed to carry result information from recognizer (including enrollment) and verifier resources. The normative definition of NLSML is the RelaxNG schema in Section 16.1. Note that the elements and attributes of this format are defined in the MRCPv2 namespace. In the result structure, they must either be prefixed by a namespace prefix declared within the result or must be children of an element identified as belonging to the respective namespace. For details on how to use XML Namespaces, see [W3C.REC-xml-names11-20040204]. Section 2 of [W3C.REC-xml-names11-20040204] provides details on how to declare namespaces and namespace prefixes.
The root element of NLSML is <result>. Optional child elements are <interpretation>, <enrollment-result>, and <verification-result>, at least one of which must be present. A single <result> MAY contain any or all of the optional child elements. Details of the <result> and <interpretation> elements and their subelements and attributes
can be found in Section 9.6. Details of the <enrollment-result> element and its subelements can be found in Section 9.7. Details of the <verification-result> element and its subelements can be found in Section 11.5.2.
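The following is an informative sketch of a minimal NLSML result (the grammar URI, the 'ex' namespace, and the input text are illustrative):

   <?xml version="1.0"?>
   <result xmlns="urn:ietf:params:xml:ns:mrcpv2"
           xmlns:ex="http://www.example.com/example"
           grammar="http://www.example.com/theYesNoGrammar">
     <interpretation>
       <instance>
         <ex:response>yes</ex:response>
       </instance>
       <input>ok</input>
     </interpretation>
   </result>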
Server resources may be discovered and their capabilities learned by clients through standard SIP machinery. The client MAY issue a SIP OPTIONS transaction to a server, which has the effect of requesting the capabilities of the server. The server MUST respond to such a request with an SDP-encoded description of its capabilities according to RFC 3264 [RFC3264]. The MRCPv2 capabilities are described by a single "m=" line containing the media type "application" and transport type "TCP/TLS/MRCPv2" or "TCP/MRCPv2". There MUST be one "resource" attribute for each media resource that the server supports, and it has the resource type identifier as its value.
The SDP description MUST also contain "m=" lines describing the audio capabilities and the coders the server supports.
In this example, the client uses the SIP OPTIONS method to query the capabilities of the MRCPv2 server.
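The exchange might look like the following informative sketch, in which the server answers the OPTIONS request with a 200 OK carrying an SDP description of its resources and coders (addresses, origin fields, and payload types are illustrative, and most SIP header fields are elided):

   C->S: OPTIONS sip:mrcp@server.example.com SIP/2.0
         ...
         Accept:application/sdp

   S->C: SIP/2.0 200 OK
         ...
         Content-Type:application/sdp
         Content-Length:...

         v=0
         o=server 28908 28908 IN IP4 192.0.2.12
         s=-
         i=MRCPv2 server capabilities
         c=IN IP4 192.0.2.12
         t=0 0
         m=application 0 TCP/MRCPv2 1
         a=resource:speechsynth
         a=resource:speechrecog
         m=audio 0 RTP/AVP 0 8
         a=rtpmap:0 PCMU/8000
         a=rtpmap:8 PCMA/8000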
This resource processes text markup provided by the client and generates a stream of synthesized speech in real time. Depending upon the server implementation and capability of this resource, the client can also dictate parameters of the synthesized speech such as voice characteristics, speaker speed, etc.
The synthesizer resource is controlled by MRCPv2 requests from the client. Similarly, the resource can respond to these requests or generate asynchronous events to the client to indicate conditions of interest to the client during the generation of the synthesized speech stream.
This section applies for the following resource types:
o speechsynth
o basicsynth
The capabilities of these resources are defined in Section 3.1.
The synthesizer maintains a state machine to process MRCPv2 requests from the client. The state transitions shown below describe the states of the synthesizer and reflect the state of the request at the head of the synthesizer resource queue. A SPEAK request in the PENDING state can be deleted or stopped by a STOP request without affecting the state of the resource.
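The following is a simplified, informative sketch of the principal transitions; it omits, for example, the handling of CONTROL, BARGE-IN-OCCURRED, and queued requests:

   Idle     ---- SPEAK ----------------> Speaking
   Speaking ---- SPEAK-COMPLETE -------> Idle
   Speaking ---- STOP -----------------> Idle
   Speaking ---- PAUSE ----------------> Paused
   Paused   ---- RESUME ---------------> Speaking
   Paused   ---- STOP -----------------> Idle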
A synthesizer method can contain header fields containing request options and information to augment the Request, Response, or Event it is associated with.
This header field MAY be specified in a CONTROL method and controls the amount to jump forward or backward in an active SPEAK request. A '+' or '-' indicates a value relative to the current playing position. This header field MAY also be specified in a SPEAK request as a desired offset into the synthesized speech. In this case, the synthesizer MUST begin speaking from this amount of time into the speech markup. Note that an offset that extends beyond the end of
the produced speech will result in audio of length zero. The different speech length units supported are dependent on the synthesizer implementation. If the synthesizer resource does not support a unit for the operation, the resource MUST respond with a status-code of 409 "Unsupported Header Field Value".
This header field MAY be sent as part of the SPEAK method to enable "kill-on-barge-in" support. If enabled, the SPEAK method is interrupted by DTMF input detected by a signal detector resource or by the start of speech sensed or recognized by the speech recognizer resource.
The client MUST send a BARGE-IN-OCCURRED method to the synthesizer resource when it receives a barge-in-able event from any source. This source could be a synthesizer resource or signal detector resource and MAY be either local or distributed. If this header field is not specified in a SPEAK request or explicitly set by a SET-PARAMS, the default value for this header field is "true".
If the recognizer or signal detector resource is on the same server as the synthesizer and both are part of the same session, the server MAY work with both to provide internal notification to the synthesizer so that audio may be stopped without having to wait for the client's BARGE-IN-OCCURRED event.
When playing a prompt to the user with Kill-On-Barge-In enabled and asking for input, it is generally RECOMMENDED that the client issue the RECOGNIZE request ahead of the SPEAK request for optimum performance
and user experience. This way, it is guaranteed that the recognizer is online before the prompt starts playing and the user's speech will not be truncated at the beginning (especially for power users).
This header field MAY be part of the SET-PARAMS/GET-PARAMS or SPEAK request from the client to the server and specifies a URI that references the profile of the speaker. Speaker profiles are collections of voice parameters like gender, accent, etc.
This header field MUST be specified in a SPEAK-COMPLETE event coming from the synthesizer resource to the client. This indicates the reason the SPEAK request completed.
This header field MAY be specified in a SPEAK-COMPLETE event coming from the synthesizer resource to the client. This contains the reason text behind the SPEAK request completion. This header field communicates text describing the reason for the failure, such as an error in parsing the speech markup text.
The completion reason text is provided for client use in logs and for debugging and instrumentation purposes. Clients MUST NOT interpret the completion reason text.
The "Voice-" parameters are derived from the similarly named attributes of the voice element specified in W3C's Speech Synthesis Markup Language Specification (SSML) [W3C.REC-speech-synthesis-20040907]. Legal values for these parameters are as defined in that specification.
These header fields MAY be sent in SET-PARAMS or GET-PARAMS requests to define or get default values for the entire session or MAY be sent in the SPEAK request to define default values for that SPEAK request. Note that SSML content can itself set these values within the SSML document.
Voice parameter header fields MAY also be sent in a CONTROL method to affect a SPEAK request in progress and change its behavior on the fly. If the synthesizer resource does not support this operation, it MUST reject the request with a status-code of 403 "Unsupported Header Field".
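For example (values and identifiers are illustrative; message lengths are elided), a client might set a session-wide default voice with SET-PARAMS and later alter the in-progress SPEAK with CONTROL:

   C->S: MRCP/2.0 ... SET-PARAMS 543261
         Channel-Identifier:32AECB23433802@speechsynth
         Voice-gender:female
         Voice-age:25

   C->S: MRCP/2.0 ... CONTROL 543262
         Channel-Identifier:32AECB23433802@speechsynth
         Voice-gender:male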
prosody-param-name is any one of the attribute names under the prosody element specified in W3C's Speech Synthesis Markup Language Specification [W3C.REC-speech-synthesis-20040907]. The prosody-param-value is any one of the value choices of the corresponding prosody element attribute from that specification.
These header fields MAY be sent in SET-PARAMS or GET-PARAMS requests to define or get default values for the entire session or MAY be sent in the SPEAK request to define default values for that SPEAK request. Furthermore, these attributes can be part of the speech text marked up in SSML.
The prosody parameter header fields in the SET-PARAMS or SPEAK request only apply if the speech data is of type 'text/plain' and does not use a speech markup format.
These prosody parameter header fields MAY also be sent in a CONTROL method to affect a SPEAK request in progress and change its behavior on the fly. If the synthesizer resource does not support this operation, it MUST respond back to the client with a status-code of 403 "Unsupported Header Field".
This header field contains timestamp information in a "timestamp" field. This is a Network Time Protocol (NTP) [RFC5905] timestamp, a 64-bit number in decimal form. It MUST be synced with the Real-time Transport Protocol (RTP) [RFC3550] timestamp of the media stream through the RTP Control Protocol (RTCP) [RFC3550].
Markers are bookmarks that are defined within the markup. Most speech markup formats provide mechanisms to embed marker fields within speech texts. The synthesizer generates SPEECH-MARKER events when it reaches these marker fields. This header field MUST be part of the SPEECH-MARKER event and contain the marker tag value after the timestamp, separated by a semicolon. In these events, the timestamp marks the time the text corresponding to the marker was emitted as speech by the synthesizer.
This header field MUST also be returned in responses to STOP, CONTROL, and BARGE-IN-OCCURRED methods, in the SPEAK-COMPLETE event, and in an IN-PROGRESS SPEAK response. In these messages, if any markers have been encountered for the current SPEAK, the marker tag value MUST be the last embedded marker encountered. If no markers have yet been encountered for the current SPEAK, only the timestamp is REQUIRED. Note that in these events, the purpose of this header field is to provide timestamp information associated with important events within the lifecycle of a request (start of SPEAK processing, end of SPEAK processing, receipt of CONTROL/STOP/BARGE-IN-OCCURRED).
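As an informative sketch, a SPEECH-MARKER event might look like this (the timestamp and marker name are illustrative):

   S->C: MRCP/2.0 ... SPEECH-MARKER 543257 IN-PROGRESS
         Channel-Identifier:32AECB23433802@speechsynth
         Speech-Marker:timestamp=857206027059;marker-1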
This header field specifies the default language of the speech data if the language is not specified in the markup. The value of this header field MUST conform to RFC 5646 [RFC5646]. The header field MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.
When the synthesizer needs to fetch documents or other resources like speech markup or audio files, this header field controls the corresponding URI access properties. This provides client policy on when the synthesizer should retrieve content from the server. A value of "prefetch" indicates the content MAY be downloaded when the request is received, whereas "safe" indicates that content MUST NOT
be downloaded until actually referenced. The default value is "prefetch". This header field MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.
When the synthesizer needs to fetch documents or other resources like speech audio files, this header field controls the corresponding URI access properties. This provides client policy on whether the synthesizer is permitted to attempt to optimize speech by pre-fetching audio. The value is either "safe", meaning that audio is only fetched when it is referenced, never before; "prefetch", which permits, but does not require, the implementation to pre-fetch the audio; or "stream", which allows it to stream the audio fetches. The default value is "prefetch". This header field MAY occur in SPEAK, SET-PARAMS, or GET-PARAMS requests.
When a synthesizer method needs a synthesizer to fetch or access a URI and the access fails, the server SHOULD provide the failed URI in this header field in the method response, unless there are multiple URI failures, in which case the server MUST provide one of the failed URIs in this header field in the method response.
When a synthesizer method needs a synthesizer to fetch or access a URI and the access fails, the server MUST provide, through this header field, the URI-specific or protocol-specific response code for the URI given in the Failed-URI header field of the method response. The value encoding is UTF-8 (RFC 3629 [RFC3629]) to accommodate any access protocol -- some access protocols might have a response string instead of a numeric response code.
When a client issues a CONTROL request to a currently speaking synthesizer resource to jump backward, and the target jump point is before the start of the current SPEAK request, the current SPEAK request MUST restart from the beginning of its speech data and the server's response to the CONTROL request MUST contain this header field with a value of "true" indicating a restart.
This header field MAY be specified in a CONTROL method to control the maximum length of speech to speak, relative to the current speaking point in the currently active SPEAK request. If numeric, the value MUST be a positive integer. If a header field with a Tag unit is specified, then the speech output continues until the tag is reached or the SPEAK request is completed, whichever comes first. This header field MAY be specified in a SPEAK request to indicate the length to speak from the speech data and is relative to the point in speech that the SPEAK request starts. The different speech length units supported are synthesizer implementation dependent. If a server does not support the specified unit, the server MUST respond with a status-code of 409 "Unsupported Header Field Value".
This header field is used to indicate whether a lexicon has to be loaded or unloaded. The value "true" means to load the lexicon if not already loaded, and the value "false" means to unload the lexicon if it is loaded. The default value for this header field is "true". This header field MAY be specified in a DEFINE-LEXICON method.
This header field is used to specify a list of active pronunciation lexicon URIs and the search order among the active lexicons. Lexicons specified within the SSML document take precedence over the lexicons specified in this header field. This header field MAY be specified in the SPEAK, SET-PARAMS, and GET-PARAMS methods.
Marked-up text for the synthesizer to speak is specified as a typed media entity in the message body. The speech data to be spoken by the synthesizer can be specified inline by embedding the data in the message body or by reference by providing a URI for accessing the data. In either case, the data and the format used to mark up the speech need to be of a content type supported by the server.
All MRCPv2 servers containing synthesizer resources MUST support both plain text speech data and W3C's Speech Synthesis Markup Language [W3C.REC-speech-synthesis-20040907] and hence MUST support the media types 'text/plain' and 'application/ssml+xml'. Other formats MAY be supported.
If the speech data is to be fetched by URI reference, the media type 'text/uri-list' (see RFC 2483 [RFC2483]) is used to indicate one or more URIs that, when dereferenced, will contain the content to be spoken. If a list of speech URIs is specified, the resource MUST speak the speech data provided by each URI in the order in which the URIs are specified in the content.
MRCPv2 clients and servers MUST support the 'multipart/mixed' media type. This is the appropriate media type to use when providing a mix of URI and inline speech data. Embedded within the multipart content block, there MAY be content for the 'text/uri-list', 'application/ssml+xml', and/or 'text/plain' media types. The character set and encoding used in the speech data is specified according to standard media type definitions. The multipart content MAY also contain actual audio data. Clients may have recorded audio clips stored in memory or on a local device and wish to play them as part of the SPEAK request. The audio portions MAY be sent by the client as part of the multipart content block. This audio is referenced in the speech markup data that forms another part of the multipart content block, according to the 'multipart/mixed' media type specification.
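The following informative sketch shows a SPEAK body mixing a URI list and inline SSML (the boundary string, URIs, and SSML content are illustrative, and lengths are elided):

   Content-Type:multipart/mixed; boundary="break"

   --break
   Content-Type:text/uri-list
   Content-Length:...

   http://www.example.com/prompts/intro.ssml
   --break
   Content-Type:application/ssml+xml
   Content-Length:...

   <?xml version="1.0"?>
   <speak version="1.0"
          xmlns="http://www.w3.org/2001/10/synthesis"
          xml:lang="en-US">
     <p>Welcome back.</p>
   </speak>
   --break--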
Synthesizer lexicon data from the client to the server can be provided inline or by reference. In either case, the data is carried as typed media in the message body of the MRCPv2 request message (see Section 8.14).
When a lexicon is specified inline in the message, the client MUST provide a Content-ID for that lexicon as part of the content header fields. The server MUST store the lexicon associated with that Content-ID for the duration of the session. A stored lexicon can be overwritten by defining a new lexicon with the same Content-ID.
Lexicons that have been associated with a Content-ID can be referenced through the 'session' URI scheme (see Section 13.6).
If lexicon data is specified by external URI reference, the media type 'text/uri-list' (see RFC 2483 [RFC2483]) is used to list the one or more URIs that may be dereferenced to obtain the lexicon data. All MRCPv2 servers MUST support the "http" and "https" URI access mechanisms and MAY support other mechanisms.
If the data in the message body consists of a mix of URI and inline lexicon data, the 'multipart/mixed' media type is used. The character set and encoding used in the lexicon data may be specified according to standard media type definitions.
The SPEAK request provides the synthesizer resource with the speech text and initiates speech synthesis and streaming. The SPEAK method MAY carry voice and prosody header fields that alter the behavior of the voice being synthesized, as well as a typed media message body containing the actual marked-up text to be spoken.
The SPEAK method implementation MUST do a fetch of all external URIs that are part of that operation. If caching is implemented, this URI fetching MUST conform to the cache-control hints and parameter header fields associated with the method in deciding whether it is to be fetched from cache or from the external server. If these hints/parameters are not specified in the method, the values set for the session using SET-PARAMS/GET-PARAMS apply. If they were not set for the session, their default values apply.
When applying voice parameters, there are three levels of precedence. Highest precedence goes to those specified within the speech markup text, followed by those specified in the header fields of the SPEAK request (which therefore apply for that SPEAK request only), followed by the session default values that can be set using the SET-PARAMS request and apply for subsequent methods invoked during the session.
If the resource was idle at the time the SPEAK request arrived at the server and the SPEAK method is being actively processed, the resource responds immediately with a success status code and a request-state of IN-PROGRESS.
If the resource is in the speaking or paused state when the SPEAK method arrives at the server, i.e., it is in the middle of processing a previous SPEAK request, the status returns success with a request-state of PENDING. The server places the SPEAK request in the synthesizer resource request queue. The request queue operates
strictly FIFO: requests are processed serially in order of receipt. If the current SPEAK fails, all SPEAK methods in the pending queue are cancelled and each generates a SPEAK-COMPLETE event with a Completion-Cause of "cancelled".
For the synthesizer resource, SPEAK is the only method that can return a request-state of IN-PROGRESS or PENDING. When the text has been synthesized and played into the media stream, the resource issues a SPEAK-COMPLETE event with the request-id of the SPEAK request and a request-state of COMPLETE.
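A typical exchange might proceed as in the following informative sketch (identifiers, timestamps, and content are illustrative; message lengths are elided):

   C->S: MRCP/2.0 ... SPEAK 543257
         Channel-Identifier:32AECB23433802@speechsynth
         Voice-gender:neutral
         Kill-On-Barge-In:true
         Content-Type:application/ssml+xml
         Content-Length:...

         <?xml version="1.0"?>
         <speak version="1.0"
                xmlns="http://www.w3.org/2001/10/synthesis"
                xml:lang="en-US">
           <p>Your flight departs at nine fifteen.</p>
         </speak>

   S->C: MRCP/2.0 ... 543257 200 IN-PROGRESS
         Channel-Identifier:32AECB23433802@speechsynth
         Speech-Marker:timestamp=857206027059

   S->C: MRCP/2.0 ... SPEAK-COMPLETE 543257 COMPLETE
         Channel-Identifier:32AECB23433802@speechsynth
         Completion-Cause:000 normal
         Speech-Marker:timestamp=857206027065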
The STOP method from the client to the server tells the synthesizer resource to stop speaking if it is speaking something.
The STOP request can be sent with an Active-Request-Id-List header field to stop zero or more specific SPEAK requests that may be in the queue; the response returns a status-code of 200 "Success". If no Active-Request-Id-List header field is sent in the STOP request, the server terminates all outstanding SPEAK requests.
If a STOP request successfully terminated one or more PENDING or IN-PROGRESS SPEAK requests, then the response MUST contain an Active-Request-Id-List header field enumerating the SPEAK request-ids that were terminated. Otherwise, there is no Active-Request-Id-List header field in the response. No SPEAK-COMPLETE events are sent for such terminated requests.
If a SPEAK request that was IN-PROGRESS and speaking was stopped, the next pending SPEAK request, if any, becomes IN-PROGRESS at the resource and enters the speaking state.
If a SPEAK request that was IN-PROGRESS and paused was stopped, the next pending SPEAK request, if any, becomes IN-PROGRESS and enters the paused state.
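For example (identifiers are illustrative; message lengths are elided):

   C->S: MRCP/2.0 ... STOP 543258
         Channel-Identifier:32AECB23433802@speechsynth

   S->C: MRCP/2.0 ... 543258 200 COMPLETE
         Channel-Identifier:32AECB23433802@speechsynth
         Active-Request-Id-List:543257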
The BARGE-IN-OCCURRED method, when used with the synthesizer resource, provides a client that has detected a barge-in-able event a means to communicate the occurrence of the event to the synthesizer resource.
This method is useful in two scenarios:
1. The client has detected DTMF digits in the input media or some other barge-in-able event and wants to communicate that to the synthesizer resource.
2. The recognizer resource and the synthesizer resource are in different servers. In this case, the client acts as an intermediary for the two servers. It receives an event from the recognition resource and sends a BARGE-IN-OCCURRED request to the synthesizer. In such cases, the BARGE-IN-OCCURRED method would also have a Proxy-Sync-Id header field received from the resource generating the original event.
If a SPEAK request is active with kill-on-barge-in enabled (see Section 8.4.2), and the BARGE-IN-OCCURRED event is received, the synthesizer MUST immediately stop streaming out audio. It MUST also terminate any speech requests queued behind the current active one, irrespective of whether or not they have barge-in enabled. If a barge-in-able SPEAK request was playing and it was terminated, the response MUST contain an Active-Request-Id-List header field listing the request-ids of all SPEAK requests that were terminated. The server generates no SPEAK-COMPLETE events for these requests.
If there were no SPEAK requests terminated by the synthesizer resource as a result of the BARGE-IN-OCCURRED method, the server MUST respond to the BARGE-IN-OCCURRED with a status-code of 200 "Success", and the response MUST NOT contain an Active-Request-Id-List header field.
If the synthesizer and recognizer resources are part of the same MRCPv2 session, they can be optimized for a quicker kill-on-barge-in response if the recognizer and synthesizer interact directly. In these cases, the client MUST still react to a START-OF-INPUT event from the recognizer by invoking the BARGE-IN-OCCURRED method to the synthesizer. The client MUST invoke the BARGE-IN-OCCURRED if it has any outstanding requests to the synthesizer resource in either the PENDING or IN-PROGRESS state.
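An informative sketch of the exchange follows (identifiers are illustrative; the Proxy-Sync-Id value is assumed to have arrived in the event that reported the barge-in):

   C->S: MRCP/2.0 ... BARGE-IN-OCCURRED 543259
         Channel-Identifier:32AECB23433802@speechsynth
         Proxy-Sync-Id:987654321

   S->C: MRCP/2.0 ... 543259 200 COMPLETE
         Channel-Identifier:32AECB23433802@speechsynth
         Active-Request-Id-List:543257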
The PAUSE method from the client to the server tells the synthesizer resource to pause speech output if it is speaking something. If a PAUSE method is issued on a session when a SPEAK is not active, the server MUST respond with a status-code of 402 "Method not valid in this state". If a PAUSE method is issued on a session when a SPEAK is active and paused, the server MUST respond with a status-code of 200 "Success". If a SPEAK request was active, the server MUST return an Active-Request-Id-List header field whose value contains the request-id of the SPEAK request that was paused.
The RESUME method from the client to the server tells a paused synthesizer resource to resume speaking. If a RESUME request is issued on a session with no active SPEAK request, the server MUST respond with a status-code of 402 "Method not valid in this state". If a RESUME request is issued on a session with an active SPEAK request that is speaking (i.e., not paused), the server MUST respond with a status-code of 200 "Success". If a SPEAK request was paused, the server MUST return an Active-Request-Id-List header field whose value contains the request-id of the SPEAK request that was resumed.
The CONTROL method from the client to the server tells a synthesizer that is speaking to modify what it is speaking on the fly. This method is used to request the synthesizer to jump forward or backward in what it is speaking, change speaker rate, speaker parameters, etc. It affects only the currently IN-PROGRESS SPEAK request. Depending on the implementation and capability of the synthesizer resource, it may or may not support the various modifications indicated by header fields in the CONTROL request.
When a client invokes a CONTROL method to jump forward and the operation goes beyond the end of the active SPEAK method's text, the CONTROL request still succeeds. The active SPEAK request completes and returns a SPEAK-COMPLETE event following the response to the CONTROL method. If there are more SPEAK requests in the queue, the synthesizer resource starts at the beginning of the next SPEAK request in the queue.
When a client invokes a CONTROL method to jump backward and the operation jumps to the beginning or beyond the beginning of the speech data of the active SPEAK method, the CONTROL request still succeeds. The response to the CONTROL request contains the Speak-Restart header field, and the active SPEAK request restarts from the beginning of its speech data.
These two behaviors can be used to rewind or fast-forward across multiple speech requests, if the client wants to break up a speech markup text into multiple SPEAK requests.
If a SPEAK request was active when the CONTROL method was received, the server MUST return an Active-Request-Id-List header field containing the request-id of the SPEAK request that was active.
This is an Event message from the synthesizer resource to the client that indicates the corresponding SPEAK request was completed. The request-id field matches the request-id of the SPEAK request that initiated the speech that just completed. The request-state field is set to COMPLETE by the server, indicating that this is the last event with the corresponding request-id. The Completion-Cause header field specifies the cause code pertaining to the status and reason of request completion, such as the SPEAK completed normally or because of an error, kill-on-barge-in, etc.
This is an event generated by the synthesizer resource to the client when the synthesizer encounters a marker tag in the speech markup it is currently processing. The value of the request-id field MUST match that of the corresponding SPEAK request. The request-state field MUST have the value "IN-PROGRESS" as the speech is still not complete. The value of the speech marker tag hit, describing where the synthesizer is in the speech markup, MUST be returned in the Speech-Marker header field, along with an NTP timestamp indicating the instant in the output speech stream that the marker was encountered. The SPEECH-MARKER event MUST also be generated with a null marker value and output NTP timestamp when a SPEAK request in the PENDING state (i.e., in the queue) changes state to IN-PROGRESS and starts speaking. The NTP timestamp MUST be synchronized with the RTP timestamp used to generate the speech stream through standard RTCP machinery.
The DEFINE-LEXICON method, from the client to the server, provides a lexicon and tells the server to load or unload the lexicon (see Section 8.4.16). The media type of the lexicon is provided in the Content-Type header (see Section 8.5.2). One such media type is "application/pls+xml" for the Pronunciation Lexicon Specification (PLS) [W3C.REC-pronunciation-lexicon-20081014] [RFC4267].
If the server resource is in the speaking or paused state, the server MUST respond with a failure status-code of 402 "Method not valid in this state".
If the resource is in the idle state and is able to successfully load/unload the lexicon, the status MUST return a 200 "Success" status-code and the request-state MUST be COMPLETE.
If the synthesizer could not define the lexicon for some reason, for example, because the download failed or the lexicon was in an unsupported form, the server MUST respond with a failure status-code of 407 and a Completion-Cause header field describing the failure reason.
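An informative sketch of loading an inline PLS lexicon follows (the Content-ID, lexeme, and pronunciation content are illustrative; message lengths are elided):

   C->S: MRCP/2.0 ... DEFINE-LEXICON 543263
         Channel-Identifier:32AECB23433802@speechsynth
         Load-Lexicon:true
         Content-ID:<lexicon-1@form-level.store>
         Content-Type:application/pls+xml
         Content-Length:...

         <?xml version="1.0" encoding="UTF-8"?>
         <lexicon version="1.0"
             xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
             alphabet="ipa" xml:lang="en-US">
           <lexeme>
             <grapheme>MRCP</grapheme>
             <alias>media resource control protocol</alias>
           </lexeme>
         </lexicon>

   S->C: MRCP/2.0 ... 543263 200 COMPLETE
         Channel-Identifier:32AECB23433802@speechsynth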
The speech recognizer resource receives an incoming voice stream and provides the client with an interpretation of what was spoken in textual form.
The recognizer resource is controlled by MRCPv2 requests from the client. The recognizer resource can both respond to these requests and generate asynchronous events to the client to indicate conditions of interest during the processing of the method.
This section applies to the following resource types.
o speechrecog

o dtmfrecog
The difference between the above two resources is in their level of support for recognition grammars. The "dtmfrecog" resource type is capable of recognizing only DTMF digits and hence accepts only DTMF grammars. It only generates barge-in for DTMF inputs and ignores speech. The "speechrecog" resource type can recognize regular speech as well as DTMF digits and hence MUST support grammars describing either speech or DTMF. This resource generates barge-in events for speech and/or DTMF. By analyzing the grammars that are activated by the RECOGNIZE method, it determines if a barge-in should occur for speech and/or DTMF. When the recognizer decides it needs to generate a barge-in, it also generates a START-OF-INPUT event to the client. The recognizer resource MAY support recognition in the normal or hotword modes or both (although note that a single "speechrecog" resource does not perform normal and hotword mode recognition simultaneously). For implementations where a single recognizer resource does not support both modes, or simultaneous normal and hotword recognition is desired, the two modes can be invoked through separate resources allocated to the same SIP dialog (with different MRCP session identifiers) and share the RTP audio feed.
The capabilities of the recognizer resource are enumerated below:
Normal Mode Recognition Normal mode recognition tries to match all of the speech or DTMF against the grammar and returns a no-match status if the input fails to match or the method times out.
Hotword Mode Recognition Hotword mode is where the recognizer looks for a match against specific speech grammar or DTMF sequence and ignores speech or DTMF that does not match. The recognition completes only if there is a successful match of grammar, if the client cancels the request, or if there is a non-input or recognition timeout.
Voice Enrolled Grammars A recognizer resource MAY optionally support Voice Enrolled Grammars. With this functionality, enrollment is performed using a person's voice. For example, a list of contacts can be created and maintained by recording the person's names using the caller's voice. This technique is sometimes also called speaker-dependent recognition.
Interpretation A recognizer resource MAY be employed strictly for its natural language interpretation capabilities by supplying it with a text string as input instead of speech. In this mode, the resource takes text as input and produces an "interpretation" of the input according to the supplied grammar.
Voice enrollment has the concept of an enrollment session. A session to add a new phrase to a personal grammar involves an initial enrollment followed by enough repeated utterances before the new phrase is committed to the personal grammar. Each time an utterance is recorded, it is compared for similarity with the other samples, and a clash test is performed against other entries in the personal grammar to ensure there are no similar and confusable entries.
Enrollment is done using a recognizer resource. Controlling which utterances are to be considered for enrollment of a new phrase is done by setting a header field (see Section 9.4.39) in the RECOGNIZE request.
Interpretation is accomplished through the INTERPRET method (Section 9.20) and the Interpret-Text header field (Section 9.4.30).
If a recognizer resource supports voice enrolled grammars, starting an enrollment session does not change the state of the recognizer resource. Once an enrollment session is started, then utterances are enrolled by calling the RECOGNIZE method repeatedly. The state of the speech recognizer resource goes from IDLE to RECOGNIZING state each time RECOGNIZE is called.
It is OPTIONAL for a recognizer resource to support voice enrolled grammars. If the recognizer resource does support voice enrolled grammars, it MUST support the enrollment-related methods defined for this resource, such as START-PHRASE-ENROLLMENT and END-PHRASE-ENROLLMENT (discussed below).
A recognizer message can contain header fields containing request options and information to augment the Method, Response, or Event message it is associated with.
For enrollment-specific header fields that can appear as part of SET-PARAMS or GET-PARAMS methods, the following general rule applies: the START-PHRASE-ENROLLMENT method MUST be invoked before these header fields may be set through the SET-PARAMS method or retrieved through the GET-PARAMS method.
Note that the Waveform-URI header field of the Recognizer resource can also appear in the response to the END-PHRASE-ENROLLMENT method.
When a recognizer resource recognizes or matches a spoken phrase with some portion of the grammar, it associates a confidence level with that match. The Confidence-Threshold header field tells the recognizer resource what confidence level the client considers a successful match. This is a float value between 0.0 and 1.0 indicating the recognizer's confidence in the recognition. If the recognizer determines that there is no candidate match with a confidence that is greater than the confidence threshold, then it MUST return no-match as the recognition result. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS. The default value for this header field is implementation specific, as is the interpretation of any specific value for this header field. Although values for servers from different vendors are not comparable, it is expected that clients will tune this value over time for a given server.
To filter out background noise and not mistake it for speech, the recognizer resource supports a variable level of sound sensitivity. The Sensitivity-Level header field is a float value between 0.0 and 1.0 and allows the client to set the sensitivity level for the recognizer. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS. A higher value for this header field means higher sensitivity. The default value for this header field is implementation specific, as is the interpretation of any specific value for this header field. Although values for servers from different vendors are not comparable, it is expected that clients will tune this value over time for a given server.
Depending on the implementation and capability of the recognizer resource, it may be tunable towards performance or accuracy. Higher accuracy may mean more processing and higher CPU utilization, resulting in fewer active sessions per server, and vice versa. The value is a float between 0.0 and 1.0. A value of 0.0 means fastest recognition. A value of 1.0 means best accuracy. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS. The default value for this
header field is implementation specific. Although values for servers from different vendors are not comparable, it is expected that clients will tune this value over time for a given server.
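For example, a client might tune these values for the session as follows (the values shown are illustrative; sensible settings are server specific, and message lengths are elided):

   C->S: MRCP/2.0 ... SET-PARAMS 543264
         Channel-Identifier:32AECB23433802@speechrecog
         Confidence-Threshold:0.9
         Sensitivity-Level:0.5
         Speed-Vs-Accuracy:0.7

   S->C: MRCP/2.0 ... 543264 200 COMPLETE
         Channel-Identifier:32AECB23433802@speechrecog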
When the recognizer matches an incoming stream with the grammar, it may come up with more than one alternative match because of confidence levels in certain words or conversation paths. If this header field is not specified, by default, the recognizer resource returns only the best match above the confidence threshold. The client, by setting this header field, can ask the recognition resource to send it more than one alternative. All alternatives must still be above the Confidence-Threshold. A value greater than one does not guarantee that the recognizer will provide the requested number of alternatives. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS. The minimum value for this header field is 1. The default value for this header field is 1.
When the recognizer detects barge-in-able input and generates a START-OF-INPUT event, that event MUST carry this header field to specify whether the input that caused the barge-in was DTMF or speech.
When recognition is started and there is no speech detected for a certain period of time, the recognizer can send a RECOGNITION-COMPLETE event to the client with a Completion-Cause of "no-input-timeout" and terminate the recognition operation. The client can use the No-Input-Timeout header field to set this timeout. The value is in milliseconds and can range from 0 to an implementation-specific maximum value. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS. The default value is implementation specific.
When recognition is started and there is no match for a certain period of time, the recognizer can send a RECOGNITION-COMPLETE event to the client and terminate the recognition operation. The Recognition-Timeout header field allows the client to set this timeout value. The value is in milliseconds. The value for this header field ranges from 0 to an implementation-specific maximum value. The default value is 10 seconds. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS.
If the Save-Waveform header field is set to "true", the recognizer MUST record the incoming audio stream of the recognition into a stored form and provide a URI for the client to access it. This header field MUST be present in the RECOGNITION-COMPLETE event if the Save-Waveform header field was set to "true". The value of the header field MUST be empty if there was some error condition preventing the server from recording. Otherwise, the URI generated by the server MUST be unambiguous across the server and all its recognition sessions. The content associated with the URI MUST be available to the client until the MRCPv2 session terminates.
Similarly, if the Save-Best-Waveform header field is set to "true", the recognizer MUST save the audio stream for the best repetition of the phrase that was used during the enrollment session. The recognizer MUST then record the recognized audio and make it available to the client by returning a URI in the Waveform-URI header field in the response to the END-PHRASE-ENROLLMENT method. The value of the header field MUST be empty if there was some error condition preventing the server from recording. Otherwise, the URI generated by the server MUST be unambiguous across the server and all its recognition sessions. The content associated with the URI MUST be available to the client until the MRCPv2 session terminates. See the discussion on the sensitivity of saved waveforms in Section 12.
The server MUST also return the size in octets and the duration in milliseconds of the recorded audio waveform as parameters associated with the header field.
This header field MAY be specified in the SET-PARAMS, GET-PARAMS, or the RECOGNIZE methods and tells the server resource the media type in which to store captured audio or video, such as the one captured and returned by the Waveform-URI header field.
This optional header field specifies a URI pointing to audio content to be processed by the RECOGNIZE operation. This enables the client to request recognition from a specified buffer or audio file.
input-waveform-uri = "Input-Waveform-URI" ":" uri CRLF
This header field MUST be part of a RECOGNITION-COMPLETE event coming from the recognizer resource to the client. It indicates the reason behind the RECOGNIZE method completion. This header field MUST be sent in the DEFINE-GRAMMAR and RECOGNIZE responses, if they return with a failure status and a COMPLETE state. In the ABNF below, the cause-code contains a numerical value selected from the Cause-Code column of the following table. The cause-name contains the corresponding token selected from the Cause-Name column.
+------------+-----------------------+------------------------------+
| Cause-Code | Cause-Name            | Description                  |
+------------+-----------------------+------------------------------+
| 000        | success               | RECOGNIZE completed with a   |
|            |                       | match or DEFINE-GRAMMAR      |
|            |                       | succeeded in downloading and |
|            |                       | compiling the grammar.       |
|            |                       |                              |
| 001        | no-match              | RECOGNIZE completed, but no  |
|            |                       | match was found.             |
|            |                       |                              |
| 002        | no-input-timeout      | RECOGNIZE completed without  |
|            |                       | a match due to a             |
|            |                       | no-input-timeout.            |
|            |                       |                              |
| 003        | hotword-maxtime       | RECOGNIZE in hotword mode    |
|            |                       | completed without a match    |
|            |                       | due to a                     |
|            |                       | recognition-timeout.         |
|            |                       |                              |
| 004        | grammar-load-failure  | RECOGNIZE failed due to      |
|            |                       | grammar load failure.        |
|            |                       |                              |
| 005        | grammar-compilation-  | RECOGNIZE failed due to      |
|            | failure               | grammar compilation failure. |
|            |                       |                              |
| 006        | recognizer-error      | RECOGNIZE request terminated |
|            |                       | prematurely due to a         |
|            |                       | recognizer error.            |
|            |                       |                              |
| 007        | speech-too-early      | RECOGNIZE request terminated |
|            |                       | because speech was too       |
|            |                       | early. This happens when the |
|            |                       | audio stream is already      |
|            |                       | "in-speech" when the         |
|            |                       | RECOGNIZE request was        |
|            |                       | received.                    |
|            |                       |                              |
| 008        | success-maxtime       | RECOGNIZE request terminated |
|            |                       | because speech was too long  |
|            |                       | but whatever was spoken till |
|            |                       | that point was a full match. |
|            |                       |                              |
| 009        | uri-failure           | Failure accessing a URI.     |
|            |                       |                              |
| 010        | language-unsupported  | Language not supported.      |
|            |                       |                              |
| 011        | cancelled             | A new RECOGNIZE cancelled    |
|            |                       | this one, or a prior         |
|            |                       | RECOGNIZE failed while this  |
|            |                       | one was still in the queue.  |
|            |                       |                              |
| 012        | semantics-failure     | Recognition succeeded, but   |
|            |                       | semantic interpretation of   |
|            |                       | the recognized input failed. |
|            |                       | The RECOGNITION-COMPLETE     |
|            |                       | event MUST contain the       |
|            |                       | Recognition result with only |
|            |                       | input text and no            |
|            |                       | interpretation.              |
|            |                       |                              |
| 013        | partial-match         | Speech Incomplete Timeout    |
|            |                       | expired before there was a   |
|            |                       | full match. But whatever was |
|            |                       | spoken till that point was a |
|            |                       | partial match to one or more |
|            |                       | grammars.                    |
|            |                       |                              |
| 014        | partial-match-maxtime | The Recognition-Timeout      |
|            |                       | expired before full match    |
|            |                       | was achieved. But whatever   |
|            |                       | was spoken till that point   |
|            |                       | was a partial match to one   |
|            |                       | or more grammars.            |
|            |                       |                              |
| 015        | no-match-maxtime      | The Recognition-Timeout      |
|            |                       | expired. Whatever was spoken |
|            |                       | till that point did not      |
|            |                       | match any of the grammars.   |
|            |                       | This cause could also be     |
|            |                       | returned if the recognizer   |
|            |                       | does not support detecting   |
|            |                       | partial grammar matches.     |
|            |                       |                              |
| 016        | grammar-definition-   | Any DEFINE-GRAMMAR error     |
|            | failure               | other than                   |
|            |                       | grammar-load-failure and     |
|            |                       | grammar-compilation-failure. |
+------------+-----------------------+------------------------------+
This header field MAY be specified in a RECOGNITION-COMPLETE event coming from the recognizer resource to the client. This contains the reason text behind the RECOGNIZE request completion. The server uses this header field to communicate text describing the reason for the failure, such as the specific error encountered in parsing a grammar markup.
The completion reason text is provided for client use in logs and for debugging and instrumentation purposes. Clients MUST NOT interpret the completion reason text.
This header field MAY be sent as part of the SET-PARAMS or GET-PARAMS request. If the GET-PARAMS method contains this header field with no value, then it is a request to the recognizer to return the recognizer context block. The response to such a message MAY contain a recognizer context block as a typed media message body. If the server returns a recognizer context block, the response MUST contain this header field and its value MUST match the Content-ID of the corresponding media block.
If the SET-PARAMS method contains this header field, it MUST also contain a message body containing the recognizer context data and a Content-ID matching this header field value. This Content-ID MUST match the Content-ID that came with the context data during the GET-PARAMS operation.
An implementation choosing to use this mechanism to hand off recognizer context data between servers MUST distinguish its implementation-specific block of data by using an IANA-registered content type in the IANA Media Type vendor tree.
This header field MAY be sent as part of the RECOGNIZE request. A value of "false" tells the recognizer to start recognition but not to start the no-input timer yet. The recognizer MUST NOT start the timers until the client sends a START-INPUT-TIMERS request to the recognizer. This is useful in the scenario when the recognizer and
synthesizer engines are not part of the same session. In such configurations, when a kill-on-barge-in prompt is being played (see Section 8.4.2), the client wants the RECOGNIZE request to be simultaneously active so that it can detect and implement kill-on- barge-in. However, the recognizer SHOULD NOT start the no-input timers until the prompt is finished. The default value is "true".
This header field specifies the length of silence required following user speech before the speech recognizer finalizes a result (either accepting it or generating a no-match result). The Speech-Complete-Timeout value applies when the recognizer currently has a complete match against an active grammar, and specifies how long the recognizer MUST wait for more input before declaring a match. By contrast, the Speech-Incomplete-Timeout is used when the speech is an incomplete match to an active grammar. The value is in milliseconds.
A long Speech-Complete-Timeout value delays the result to the client and therefore makes the application's response to a user slow. A short Speech-Complete-Timeout may lead to an utterance being broken up inappropriately. Reasonable speech complete timeout values are typically in the range of 0.3 seconds to 1.0 seconds. The value for this header field ranges from 0 to an implementation-specific maximum value. The default value for this header field is implementation specific. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS.
This header field specifies the required length of silence following user speech after which a recognizer finalizes a result. The incomplete timeout applies when the speech prior to the silence is an incomplete match of all active grammars. In this case, once the timeout is triggered, the partial result is rejected (with a Completion-Cause of "partial-match"). The value is in milliseconds. The value for this header field ranges from 0 to an implementation-specific maximum value. The default value for this header field is implementation specific.
The Speech-Incomplete-Timeout also applies when the speech prior to the silence is a complete match of an active grammar, but where it is possible to speak further and still match the grammar. By contrast, the Speech-Complete-Timeout is used when the speech is a complete match to an active grammar and no further spoken words can continue to represent a match.
A long Speech-Incomplete-Timeout value delays the result to the client and therefore makes the application's response to a user slow. A short Speech-Incomplete-Timeout may lead to an utterance being broken up inappropriately.
The Speech-Incomplete-Timeout is usually longer than the Speech-Complete-Timeout to allow users to pause mid-utterance (for example, to breathe). This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS.
The DTMF-Interdigit-Timeout header field specifies the inter-digit timeout value to use when recognizing DTMF input. The value is in milliseconds and ranges from 0 to an implementation-specific maximum value. The default value is 5 seconds (5000 milliseconds). This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS.
The DTMF-Term-Timeout header field specifies the terminating timeout to use when recognizing DTMF input. The DTMF-Term-Timeout applies only when no additional input is allowed by the grammar; otherwise, the DTMF-Interdigit-Timeout applies. The value is in milliseconds and ranges from 0 to an implementation-specific maximum value. The default value is 10 seconds (10000 milliseconds). This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS.
The DTMF-Term-Char header field specifies the terminating DTMF character for DTMF input recognition. The default value is NULL, which is indicated by an empty header field value. This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS.
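A non-normative sketch of a RECOGNIZE collecting a DTMF string, with illustrative timeout values and '#' as the terminating character (the grammar URI is a placeholder):

   C->S:  MRCP/2.0 ... RECOGNIZE 543263
          Channel-Identifier:32AECB23433801@speechrecog
          Cancel-If-Queue:false
          DTMF-Interdigit-Timeout:4000
          DTMF-Term-Timeout:8000
          DTMF-Term-Char:#
          Content-Type:text/uri-list
          Content-Length:...

          session:pin@form-level.store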
When a recognizer needs to fetch or access a URI and the access fails, the server SHOULD provide the failed URI in the Failed-URI header field in the method response. If multiple URI accesses fail, one of the failed URIs MUST be provided in this header field in the method response.
When a recognizer method needs the recognizer to fetch or access a URI and the access fails, the server MUST provide, in the Failed-URI-Cause header field of the method response, the URI-specific or protocol-specific response code for the URI given in the Failed-URI header field. The value encoding is UTF-8 (RFC 3629 [RFC3629]) to accommodate any access protocol, some of which might have a response string instead of a numeric response code.
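For example (non-normative; the URI, status-code, and cause values are illustrative), a response reporting a grammar fetch failure might carry both header fields:

   S->C:  MRCP/2.0 ... 543264 407 COMPLETE
          Channel-Identifier:32AECB23433801@speechrecog
          Completion-Cause:009 uri-failure
          Failed-URI:http://grammars.example.com/menu.grxml
          Failed-URI-Cause:404 Not Found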
The Save-Waveform header field allows the client to request that the recognizer resource save the audio input to the recognizer. The recognizer resource MUST then attempt to record the recognized audio, without endpointing, and make it available to the client in the form of a URI returned in the Waveform-URI header field in the RECOGNITION-COMPLETE event. If there was an error in recording the stream or the audio content is otherwise not available, the recognizer MUST return an empty Waveform-URI header field. The default value for this field is "false". This header field MAY occur in RECOGNIZE, SET-PARAMS, or GET-PARAMS. See the discussion on the sensitivity of saved waveforms in Section 12.
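A non-normative sketch of the round trip (the grammar URI, waveform URI, and its size/duration parameters are placeholders):

   C->S:  MRCP/2.0 ... RECOGNIZE 543265
          Channel-Identifier:32AECB23433801@speechrecog
          Cancel-If-Queue:false
          Save-Waveform:true
          Content-Type:text/uri-list
          Content-Length:...

          session:order@form-level.store

   S->C:  MRCP/2.0 ... RECOGNITION-COMPLETE 543265 COMPLETE
          Channel-Identifier:32AECB23433801@speechrecog
          Completion-Cause:000 success
          Waveform-URI:<http://web.media.example.com/session123/audio.wav>;
                       size=424252;duration=2543
          Content-Type:application/nlsml+xml
          Content-Length:...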
The New-Audio-Channel header field MAY be specified in a RECOGNIZE request and allows the client to tell the server that, from this point on, further input audio comes from a different audio source, channel, or speaker. If the recognizer resource had collected any input statistics or adaptation state, it MUST do what is appropriate for the specific recognition technology, including but not limited to discarding any collected input statistics or adaptation state before starting the RECOGNIZE request. Note that if multiple resources share a media stream and are collecting or using this data, and the client issues this header field to one of the resources, the reset operation applies to all resources that use the shared media stream. This helps in a number of use cases, including where the client wishes to reuse an open recognition session with an existing media session for multiple telephone calls.
The Speech-Language header field specifies the language of the recognition grammar data within a session or request, if it is not specified within the data. The value of this header field MUST follow RFC 5646 [RFC5646]. This header field MAY occur in DEFINE-GRAMMAR, RECOGNIZE, SET-PARAMS, or GET-PARAMS requests.
The Ver-Buffer-Utterance header field lets the client request that the server buffer the utterance associated with this recognition request into a buffer available to a co-resident verifier resource. The buffer is shared across resources within a session and is allocated when a verifier resource is added to the session. The client MUST NOT send this header field unless a verifier resource is instantiated for the session. The buffer is released when the verifier resource is released from the session.
The Recognition-Mode header field specifies the mode in which the RECOGNIZE method operates. The value choices are "normal" and "hotword". If the value is "normal", the RECOGNIZE starts matching speech and DTMF against the grammars specified in the RECOGNIZE request. If any portion of the speech does not match the grammar, the RECOGNIZE command completes with a no-match status. Timers may be active to detect speech in the audio (see Section 9.4.14), so the RECOGNIZE method may also complete because of a timeout waiting for speech. If the value is "hotword", the RECOGNIZE method operates in hotword mode, where it only looks for the particular keywords or DTMF sequences specified in the grammar and ignores silence or other speech in the audio stream. The default value for this header field is "normal". This header field MAY occur on the RECOGNIZE method.
The Cancel-If-Queue header field specifies what happens if the client attempts to invoke another RECOGNIZE method when this RECOGNIZE request is already in progress for the resource. The value is a Boolean. A value of "true" means the server MUST terminate this RECOGNIZE request, with a Completion-Cause of "cancelled", if the client issues another RECOGNIZE request for the same resource. A value of "false" indicates to the server that this RECOGNIZE request continues to completion; if the client issues more RECOGNIZE requests to the same resource, they are queued. When the currently active RECOGNIZE request is stopped or completes with a successful match, the first RECOGNIZE method in the queue becomes active. If the current RECOGNIZE fails, all RECOGNIZE methods in the pending queue are cancelled, and each generates a RECOGNITION-COMPLETE event with a Completion-Cause of "cancelled". This header field MUST be present in every RECOGNIZE request. There is no default value.
The Hotword-Max-Duration header field MAY be sent in a hotword mode RECOGNIZE request. It specifies the maximum length of an utterance that will be considered for hotword recognition. This header field, along with Hotword-Min-Duration, can be used to tune performance by preventing the recognizer from evaluating utterances that are too short or too long to be one of the hotwords in the grammar(s). The value is in milliseconds. The default is implementation dependent. If present in a RECOGNIZE request specifying a mode other than "hotword", the header field is ignored.
The Hotword-Min-Duration header field MAY be sent in a hotword mode RECOGNIZE request. It specifies the minimum length of an utterance that will be considered for hotword recognition. This header field, along with Hotword-Max-Duration, can be used to tune performance by preventing the recognizer from evaluating utterances that are too short or too long to be one of the hotwords in the grammar(s). The value is in milliseconds. The default value is implementation dependent. If present in a RECOGNIZE request specifying a mode other than "hotword", the header field is ignored.
The value of the Interpret-Text header field provides a pointer to the text for which a natural language interpretation is desired. The value is either a URI or text. If the value is a URI, it MUST be a Content-ID that refers to an entity of type 'text/plain' in the body of the message. Otherwise, the server MUST treat the value as the text to be interpreted. This header field MUST be used when invoking the INTERPRET method.
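A non-normative sketch of an INTERPRET request that carries the text to interpret directly in the header field value and references a previously defined grammar (the text and grammar URI are placeholders):

   C->S:  MRCP/2.0 ... INTERPRET 543266
          Channel-Identifier:32AECB23433801@speechrecog
          Interpret-Text:may I speak to Andre Roy
          Content-Type:text/uri-list
          Content-Length:...

          session:request1@form-level.store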
The DTMF-Buffer-Time header field MAY be specified in a GET-PARAMS or SET-PARAMS method and specifies the size, in milliseconds, of the type-ahead buffer for the recognizer. This is the buffer that collects DTMF digits as they are pressed, even when no RECOGNIZE command is active. When a subsequent RECOGNIZE method is received, the recognizer MUST first look to this buffer to match the RECOGNIZE request. If the digits in the buffer are not sufficient, it can continue to listen for more digits to match the grammar. The default size of this DTMF buffer is platform specific.
The Clear-DTMF-Buffer header field MAY be specified in a RECOGNIZE method and is used to tell the recognizer to clear the DTMF type-ahead buffer before starting the RECOGNIZE. The default value is "false", meaning the type-ahead buffer is not cleared before recognition begins. If this header field is "true", the RECOGNIZE clears the DTMF buffer before starting recognition, which means digits pressed by the caller before the RECOGNIZE command was issued are discarded.
The Early-No-Match header field MAY be specified in a RECOGNIZE method and is used to tell the recognizer that it MUST NOT wait for the end of speech before processing the collected speech to match active grammars. A value of "true" indicates that the recognizer MUST do early matching. The default value, if not specified, is "false". If the recognizer does not support processing collected audio before the end of speech, this header field can be safely ignored.
The Num-Min-Consistent-Pronunciations header field MAY be specified in a START-PHRASE-ENROLLMENT, SET-PARAMS, or GET-PARAMS method and specifies the minimum number of consistent pronunciations that must be obtained to voice enroll a new phrase. The minimum value is 1. The default value is implementation specific and MAY be greater than 1.
The Consistency-Threshold header field MAY be sent as part of the START-PHRASE-ENROLLMENT, SET-PARAMS, or GET-PARAMS method. Used during voice enrollment, it specifies how similar an utterance must be to previously enrolled pronunciations of the same phrase in order to be considered "consistent". The higher the threshold, the closer the match between an utterance and previous pronunciations must be for the pronunciation to be considered consistent. The threshold is a float value between 0.0 and 1.0. The default value is implementation specific.
The Clash-Threshold header field MAY be sent as part of the START-PHRASE-ENROLLMENT, SET-PARAMS, or GET-PARAMS method. Used during voice enrollment, it specifies how similar the pronunciations of two different phrases can be before they are considered to be clashing. For example, pronunciations of phrases such as "John Smith" and "Jon Smits" may be so similar that they are difficult to distinguish correctly. A smaller threshold reduces the number of clashes detected. The threshold is a float value between 0.0 and 1.0. The default value is implementation specific. Clash testing can be turned off completely by setting the Clash-Threshold header field value to 0.
The Personal-Grammar-URI header field specifies the speaker-trained grammar to be used or referenced during enrollment operations. Phrases are added to this grammar during enrollment. For example, a contact list for user "Jeff" could be stored at the Personal-Grammar-URI "http://myserver.example.com/myenrollmentdb/jeff-list". The generated grammar syntax MAY be implementation specific. There is no default value for this header field. This header field MAY be sent as part of the START-PHRASE-ENROLLMENT, SET-PARAMS, or GET-PARAMS method.
personal-grammar-uri = "Personal-Grammar-URI" ":" uri CRLF
The Enroll-Utterance header field MAY be specified in the RECOGNIZE method. If it is set to "true" and an enrollment session is active, the RECOGNIZE command MUST add the collected utterance to the personal grammar that is being enrolled. The way in which this occurs is engine specific and may be an area of future standardization. The default value is "false".
The Phrase-ID header field in a request identifies a phrase in an existing personal grammar for which enrollment is desired. It is also returned to the client in the RECOGNITION-COMPLETE event. This header field MAY occur in START-PHRASE-ENROLLMENT, MODIFY-PHRASE, or DELETE-PHRASE requests. There is no default value.
The Phrase-NL header field specifies the interpreted text to be returned when the phrase is recognized. This header field MAY occur in START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. There is no default value.
The value of the Weight header field represents the occurrence likelihood of a phrase in an enrolled grammar. When using grammar enrollment, the system is essentially constructing a grammar segment consisting of a list of possible match phrases. This can be thought of as similar to the dynamic construction of a <one-of> tag in the W3C grammar specification. Each enrolled phrase becomes an item in the list that can be matched against spoken input, similar to an <item> within a <one-of> list. This header field allows the client to assign a weight to the phrase (i.e., <item> entry) in the <one-of> list that is enrolled. Grammar weights are normalized to a sum of one at grammar compilation time, so a weight value of 1 for each phrase in an enrolled grammar list indicates that all items in the list have the same weight. This header field MAY occur in START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. The default value is implementation specific.
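As a non-normative illustration of these enrollment header fields used together (the URI, IDs, and threshold values are placeholders):

   C->S:  MRCP/2.0 ... START-PHRASE-ENROLLMENT 543267
          Channel-Identifier:32AECB23433801@speechrecog
          Num-Min-Consistent-Pronunciations:2
          Consistency-Threshold:0.7
          Clash-Threshold:0.5
          Personal-Grammar-URI:http://myserver.example.com/myenrollmentdb/jeff-list
          Phrase-ID:mike-smith
          Phrase-NL:Mike Smith
          Weight:1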
The Save-Best-Waveform header field allows the client to request that the recognizer resource save the audio stream for the best repetition of the phrase that was used during the enrollment session. The recognizer MUST attempt to record the recognized audio and make it available to the client in the form of a URI returned in the Waveform-URI header field in the response to the END-PHRASE-ENROLLMENT method. If there was an error in recording the stream or the audio data is otherwise not available, the recognizer MUST return an empty Waveform-URI header field. This header field MAY occur in the START-PHRASE-ENROLLMENT, SET-PARAMS, and GET-PARAMS methods.
The New-Phrase-ID header field replaces the ID used to identify the phrase in a personal grammar. The recognizer returns the new ID when using an enrollment grammar. This header field MAY occur in MODIFY-PHRASE requests.
The Confusable-Phrases-URI header field specifies a grammar that defines invalid phrases for enrollment. For example, typical applications do not allow an enrolled phrase that is also a command word. This header field MAY occur in RECOGNIZE requests that are part of an enrollment session.
confusable-phrases-uri = "Confusable-Phrases-URI" ":" uri CRLF
The Abort-Phrase-Enrollment header field MAY be specified in the END-PHRASE-ENROLLMENT method to abort the phrase enrollment, rather than committing the phrase to the personal grammar.
A recognizer message can carry additional data associated with the request, response, or event. The client MAY provide the grammar to be recognized in DEFINE-GRAMMAR or RECOGNIZE requests. When one or more grammars are specified using the DEFINE-GRAMMAR method, the server MUST attempt to fetch, compile, and optimize the grammar before returning a response to the DEFINE-GRAMMAR method. A RECOGNIZE request MUST completely specify the grammars to be active during the recognition operation, except when the RECOGNIZE method is being used to enroll a grammar. During grammar enrollment, such grammars are OPTIONAL. The server resource sends the recognition results in the RECOGNITION-COMPLETE event and the GET-RESULT response. Grammars and recognition results are carried in the message body of the corresponding MRCPv2 messages.
Recognizer grammar data from the client to the server can be provided inline or by reference. Either way, grammar data is carried as typed media entities in the message body of the RECOGNIZE or DEFINE-GRAMMAR
request. All MRCPv2 servers MUST accept grammars in the XML form (media type 'application/srgs+xml') of the W3C's XML-based Speech Grammar Markup Format (SRGS) [W3C.REC-speech-grammar-20040316] and MAY accept grammars in other formats. Examples include but are not limited to:
o the ABNF form (media type 'application/srgs') of SRGS
o Sun's Java Speech Grammar Format (JSGF) [refs.javaSpeechGrammarFormat]
Additionally, MRCPv2 servers MAY support the Semantic Interpretation for Speech Recognition (SISR) [W3C.REC-semantic-interpretation-20070405] specification.
When a grammar is specified inline in the request, the client MUST provide a Content-ID for that grammar as part of the content header fields. If there is no space on the server to store the inline grammar, the request MUST return with a Completion-Cause code of 016 "grammar-definition-failure". Otherwise, the server MUST associate the inline grammar block with that Content-ID and MUST store it on the server for the duration of the session. However, if the Content-ID is redefined later in the session through a subsequent DEFINE-GRAMMAR, the inline grammar previously associated with the Content-ID MUST be freed. If the Content-ID is redefined through a subsequent DEFINE-GRAMMAR with an empty message body (i.e., no grammar definition), then in addition to freeing any grammar previously associated with the Content-ID, the server MUST clear all bindings and associations to the Content-ID. Unless and until subsequently redefined, this URI MUST be interpreted by the server as one that has never been set.
Grammars that have been associated with a Content-ID can be referenced through the 'session' URI scheme (see Section 13.6). For example: session:help@root-level.store
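A non-normative sketch: the grammar is first installed under a Content-ID with DEFINE-GRAMMAR and later activated by reference through the 'session' scheme (identifiers and grammar content are placeholders):

   C->S:  MRCP/2.0 ... DEFINE-GRAMMAR 543268
          Channel-Identifier:32AECB23433801@speechrecog
          Content-Type:application/srgs+xml
          Content-ID:<help@root-level.store>
          Content-Length:...

          <?xml version="1.0"?>
          <grammar xmlns="http://www.w3.org/2001/06/grammar"
                   xml:lang="en-US" version="1.0" root="help">
            <rule id="help"><item>help</item></rule>
          </grammar>

   C->S:  MRCP/2.0 ... RECOGNIZE 543269
          Channel-Identifier:32AECB23433801@speechrecog
          Cancel-If-Queue:false
          Content-Type:text/uri-list
          Content-Length:...

          session:help@root-level.store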
Grammar data MAY be specified using external URI references. To do so, the client uses a body of media type 'text/uri-list' (see RFC 2483 [RFC2483]) to list the one or more URIs that point to the grammar data. The client can use a body of media type 'text/grammar-ref-list' (see Section 13.5.1) if it wants to assign weights to the list of grammar URIs. All MRCPv2 servers MUST support grammar access using the 'http' and 'https' URI schemes.
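For example, a 'text/grammar-ref-list' body might look as follows (non-normative; the URIs and weights are illustrative, and the authoritative syntax is the one defined in Section 13.5.1):

   <http://grammars.example.com/field1.grxml>;weight="2.0"
   <session:help@root-level.store>;weight="1.0"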
If the grammar data the client wishes to be used on a request consists of a mix of URI and inline grammar data, the client uses the 'multipart/mixed' media type to enclose the 'text/uri-list',
'application/srgs', or 'application/srgs+xml' content entities. The character set and encoding used in the grammar data are specified according to standard media type definitions.
When more than one grammar URI or inline grammar block is specified in a message body of the RECOGNIZE request, the server interprets this as a list of grammar alternatives to match against.
For example:

   <!-- the default grammar language is US English -->
   <grammar xmlns="http://www.w3.org/2001/06/grammar"
            xml:lang="en-US" version="1.0" root="request">

     <!-- single language attachment to tokens -->
     <rule id="yes">
       <one-of>
         <item xml:lang="fr-CA">oui</item>
         <item xml:lang="en-US">yes</item>
       </one-of>
     </rule>

     <!-- single language attachment to a rule expansion -->
     <rule id="request">
       may I speak to
       <one-of xml:lang="fr-CA">
         <item>Michel Tremblay</item>
         <item>Andre Roy</item>
       </one-of>
     </rule>

     <!-- multiple language attachment to a token -->
     <rule id="people1">
       <token lexicon="en-US,fr-CA"> Robert </token>
     </rule>

   </grammar>
Recognition results are returned to the client in the message body of the RECOGNITION-COMPLETE event or the GET-RESULT response message as described in Section 6.3. Element and attribute descriptions for the recognition portion of the NLSML format are provided in Section 9.6 with a normative definition of the schema in Section 16.1.
Enrollment results are returned to the client in the message body of the RECOGNITION-COMPLETE event as described in Section 6.3. Element and attribute descriptions for the enrollment portion of the NLSML format are provided in Section 9.7 with a normative definition of the schema in Section 16.2.
When a client changes servers while operating on behalf of the same incoming communication session, the Recognizer-Context-Block header field allows the client to collect a block of opaque data from one server and provide it to another server. This capability is desirable if, for example, the client needs different language support or the server issued a redirect. Here, the first recognizer resource may have collected acoustic and other data during its execution of recognition methods. After a server switch, communicating this data may allow the recognizer resource on the new server to provide better recognition. This block of data is implementation specific and MUST be carried as media type 'application/octets' in the body of the message.
This block of data is communicated in the SET-PARAMS and GET-PARAMS method/response messages. In the GET-PARAMS method, if an empty Recognizer-Context-Block header field is present, then the recognizer SHOULD return its vendor-specific context block, if any, in the message body as an entity of media type 'application/octets' with a specific Content-ID. The Content-ID value MUST also be specified in the Recognizer-Context-Block header field in the GET-PARAMS response. The SET-PARAMS request wishing to provide this vendor-specific data MUST send it in the message body as a typed entity with the same
Content-ID that it received from the GET-PARAMS. The Content-ID MUST also be sent in the Recognizer-Context-Block header field of the SET-PARAMS message.
Each speech recognition implementation choosing to use this mechanism to hand off recognizer context data among servers MUST distinguish its implementation-specific block of data from other implementations by choosing a Content-ID that is recognizable among the participating servers and unlikely to collide with values chosen by another implementation.
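A non-normative sketch of the handoff (the Content-ID, channel identifiers, request-ids, and data are placeholders; the bodies carry opaque vendor data). The final SET-PARAMS is sent to the new server:

   C->S:  MRCP/2.0 ... GET-PARAMS 543270
          Channel-Identifier:32AECB23433801@speechrecog
          Recognizer-Context-Block:

   S->C:  MRCP/2.0 ... 543270 200 COMPLETE
          Channel-Identifier:32AECB23433801@speechrecog
          Recognizer-Context-Block:<ctx01@vendor.example.com>
          Content-Type:application/octets
          Content-ID:<ctx01@vendor.example.com>
          Content-Length:...

          ...opaque vendor-specific data...

   C->S:  MRCP/2.0 ... SET-PARAMS 543271
          Channel-Identifier:98AECB23433801@speechrecog
          Recognizer-Context-Block:<ctx01@vendor.example.com>
          Content-Type:application/octets
          Content-ID:<ctx01@vendor.example.com>
          Content-Length:...

          ...opaque vendor-specific data...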
The recognizer portion of NLSML (see Section 6.3.1) represents information automatically extracted from a user's utterances by a semantic interpretation component, where "utterance" is to be taken in the general sense of a meaningful user input in any modality supported by the MRCPv2 implementation.
MRCPv2 recognizer resources employ the Natural Language Semantics Markup Language (NLSML) to represent the interpretation of natural language speech input and to format that interpretation for consumption by an MRCPv2 client.
The elements of the markup fall into the following general functional categories: interpretation, side information, and multi-modal integration.
Elements and attributes represent the semantics of a user's utterance, including the <result>, <interpretation>, and <instance> elements. The <result> element contains the full result of processing one utterance. It MAY contain multiple <interpretation> elements if the interpretation of the utterance results in multiple alternative meanings due to uncertainty in speech recognition or natural language understanding. There are at least two reasons for providing multiple interpretations:
1. The client application might have additional information, for example, information from a database, that would allow it to select a preferred interpretation from among the possible interpretations returned from the semantic interpreter.
2. A client-based dialog manager (e.g., VoiceXML [W3C.REC-voicexml20-20040316]) that was unable to select between several competing interpretations could use this information to go back to the user and find out what was intended. For example, it could issue a SPEAK request to a synthesizer resource to emit "Did you say 'Boston' or 'Austin'?"
These are elements and attributes representing additional information about the interpretation, over and above the interpretation itself. Side information includes:
1. Whether an interpretation was achieved (the <nomatch> element) and the system's confidence in an interpretation (the "confidence" attribute of <interpretation>).
2. Alternative interpretations (<interpretation>)
3. Input formats and Automatic Speech Recognition (ASR) information: the <input> element, representing the input to the semantic interpreter.
When more than one modality is available for input, the interpretation of the inputs needs to be coordinated. The "mode" attribute of <input> supports this by indicating whether the utterance was input by speech, DTMF, pointing, etc. The "timestamp-start" and "timestamp-end" attributes of <input> also provide for temporal coordination by indicating when inputs occurred.
9.6.2. Overview of Recognizer Result Elements and Their Relationships
The recognizer elements in NLSML fall into two categories:
1. description of the input that was processed, and
2. description of the meaning which was extracted from the input.
Each element has a set of associated attributes. In addition, some elements can contain multiple instances of other elements. For example, a <result> can contain multiple <interpretation> elements, each of which is taken to be an alternative. Similarly, <input> can contain multiple child <input> elements, which are taken to be cumulative. To illustrate the basic usage of these elements, as a simple example,
consider the utterance "OK" (interpreted as "yes"). The example below illustrates how that utterance and its interpretation would be represented in the NLSML markup.
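A rendering along the following lines (the grammar URI and the 'ex' application namespace are placeholders):

   <?xml version="1.0"?>
   <result xmlns="urn:ietf:params:xml:ns:mrcpv2"
           xmlns:ex="http://www.example.com/example"
           grammar="http://theYesNoGrammar">
     <interpretation>
       <instance>
         <ex:response>yes</ex:response>
       </instance>
       <input>OK</input>
     </interpretation>
   </result>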
This example includes only the minimum required information. There is an overall <result> element, which includes one interpretation and an input element. The interpretation contains the application- specific element "<response>", which is the semantically interpreted result.
The root element of the markup is <result>. The <result> element includes one or more <interpretation> elements. Multiple interpretations can result from ambiguities in the input or in the semantic interpretation. If the "grammar" attribute does not apply to all of the interpretations in the result, it can be overridden for individual interpretations at the <interpretation> level.
Attributes:
1. grammar: The grammar or recognition rule matched by this result. The format of the grammar attribute will match the rule reference semantics defined in the grammar specification. Specifically, the rule reference is in the external XML form for grammar rule references. The markup interpreter needs to know the grammar rule that is matched by the utterance because multiple rules may be simultaneously active. The value is the grammar URI used by the markup interpreter to specify the grammar. The grammar can be overridden by a grammar attribute in the <interpretation> element if the input was ambiguous as to which grammar it matched. If all interpretation elements within the result element contain their own grammar attributes, the attribute can be dropped from the result element.
An <interpretation> element contains a single semantic interpretation.
Attributes:
1. confidence: A float value from 0.0-1.0 indicating the semantic analyzer's confidence in this interpretation. A value of 1.0 indicates maximum confidence. The values are implementation dependent but are intended to align with the value interpretation for the confidence MRCPv2 header field defined in Section 9.4.1. This attribute is OPTIONAL.
2. grammar: The grammar or recognition rule matched by this interpretation (if needed to override the grammar specification at the <result> level). This attribute is only needed under <interpretation> if it is necessary to override a grammar that was defined at the <result> level. Note that the grammar attribute for the interpretation element is optional if and only if the grammar attribute is specified in the <result> element.
Interpretations MUST be sorted best-first by some measure of "goodness". The goodness measure is "confidence" if present; otherwise, it is some implementation-specific indication of quality.
The grammar is expected to be specified most frequently at the <result> level. However, it can be overridden at the <interpretation> level because it is possible that different interpretations may match different grammar rules.
The <interpretation> element includes an optional <input> element containing the input being analyzed, and at least one <instance> element containing the interpretation of the utterance.
The <instance> element contains the interpretation of the utterance. When the Semantic Interpretation for Speech Recognition format is used, the <instance> element contains the XML serialization of the result using the approach defined in that specification. When there is semantic markup in the grammar that does not create semantic objects, but instead only does a semantic translation of a portion of the input, such as translating "coke" to "coca-cola", the instance contains the whole input but with the translation applied; the second example below shows the resulting NLSML. If no semantic objects are created and no semantic translation is applied, the instance value is the same as the input value.
Attributes:
1. confidence: Each element of the instance MAY have a confidence attribute, defined in the NLSML namespace. The confidence attribute contains a float value in the range 0.0 to 1.0 reflecting the system's confidence in the analysis of that slot. A value of 1.0 indicates maximum confidence. The values are implementation dependent but are intended to align with the value interpretation for the MRCPv2 header field Confidence-Threshold defined in Section 9.4.1. This attribute is OPTIONAL.
   <instance>
     <nameAddress>
       <street confidence="0.75">123 Maple Street</street>
       <city>Mill Valley</city>
       <state>CA</state>
       <zip>90952</zip>
     </nameAddress>
   </instance>
   <input>
     My address is 123 Maple Street,
     Mill Valley, California, 90952
   </input>
   <instance>
     I would like to buy a coca-cola
   </instance>
   <input>
     I would like to buy a coke
   </input>
The <input> element is the text representation of a user's input. It includes an optional "confidence" attribute, which indicates the recognizer's confidence in the recognition result (as opposed to the confidence in the interpretation, which is indicated by the "confidence" attribute of <interpretation>). Optional "timestamp-start" and "timestamp-end" attributes indicate the start and end times of a spoken utterance, in ISO 8601 format [ISO.8601.1988].
Attributes:
1. timestamp-start: The time at which the input began. (optional)
2. timestamp-end: The time at which the input ended. (optional)
3. mode: The modality of the input, for example, speech, DTMF, etc. (optional)
4. confidence: The confidence of the recognizer in the correctness of the input in the range 0.0 to 1.0. (optional)
Note that it may not make sense for temporally overlapping inputs to have the same mode; however, this constraint is not expected to be enforced by implementations.
When there is no time zone designator, ISO 8601 time representations default to local time.
There are three possible formats for the <input> element.
1. The <input> element can contain simple text:
<input>onions</input>
A future possibility is for <input> to contain not only text but additional markup that represents prosodic information that was contained in the original utterance and extracted by the speech recognizer. This depends on the availability of ASRs that are capable of producing prosodic information. MRCPv2 clients MUST be prepared to receive such markup and MAY make use of it.
2. An <input> tag can also contain additional <input> tags. Having additional input elements allows the representation to support future multi-modal inputs as well as finer-grained speech information, such as timestamps for individual words and word-level confidences.
3. Finally, the <input> element can contain <nomatch> and <noinput> elements, which describe situations in which the speech recognizer received input that it was unable to process or did not receive any input at all, respectively.
The <nomatch> element under <input> is used to indicate that the semantic interpreter was unable to successfully match any input with confidence above the threshold. It can optionally contain the text of the best of the (rejected) matches.
   <interpretation>
     <instance/>
     <input confidence="0.1">
       <nomatch/>
     </input>
   </interpretation>

   <interpretation>
     <instance/>
     <input mode="speech" confidence="0.1">
       <nomatch>I want to go to New York</nomatch>
     </input>
   </interpretation>
<noinput> indicates that there was no input -- a timeout occurred in the speech recognizer due to silence.

   <interpretation>
     <instance/>
     <input>
       <noinput/>
     </input>
   </interpretation>
If there are multiple levels of inputs, the most natural place for <nomatch> and <noinput> elements to appear is under the highest level of <input> for <noinput>, and under the appropriate level of
<interpretation> for <nomatch>. So, <noinput> means "no input at all" and <nomatch> means "no match in speech modality" or "no match in DTMF modality". For example, to represent garbled speech combined with DTMF "1 2 3 4", the markup would be:

   <input>
     <input mode="speech"><nomatch/></input>
     <input mode="dtmf">1 2 3 4</input>
   </input>
Note: while <noinput> could be represented as an attribute of input, <nomatch> cannot, since it could potentially include PCDATA content with the best match. For parallelism, <noinput> is also an element.
All enrollment elements are contained within a single <enrollment-result> element under <result>. The elements are described below and have the schema defined in Section 16.2. The following elements are defined:
The <num-clashes> element contains the number of clashes that this pronunciation has with other pronunciations in an active enrollment session. The associated Clash-Threshold header field determines the sensitivity of the clash measurement. Note that clash testing can be turned off completely by setting the Clash-Threshold header field value to 0.
The <num-repetitions-still-needed> element contains the number of consistent pronunciations that must still be obtained before the new phrase can be added to the enrollment grammar. The number of consistent pronunciations required is specified by the client in the request header field Num-Min-Consistent-Pronunciations. The returned value must be 0 before the client can successfully commit a phrase to the grammar by ending the enrollment session.
The <consistency-status> element is used to indicate how consistent the repetitions are when learning a new phrase. It can have the values of consistent, inconsistent, and undecided.
The <confusable-phrases> element contains a list of phrases from a command grammar that are confusable with the phrase being added to the personal grammar. This element MAY be absent if there are no confusable phrases.
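A non-normative sketch of an enrollment result carrying only the elements described above (the grammar URI and element values are illustrative):

   <?xml version="1.0"?>
   <result xmlns="urn:ietf:params:xml:ns:mrcpv2"
           grammar="http://myserver.example.com/myenrollmentdb/jeff-list">
     <enrollment-result>
       <num-clashes>0</num-clashes>
       <num-repetitions-still-needed>1</num-repetitions-still-needed>
       <consistency-status>consistent</consistency-status>
     </enrollment-result>
   </result>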
The DEFINE-GRAMMAR method, from the client to the server, provides one or more grammars and requests the server to access, fetch, and compile the grammars as needed. The DEFINE-GRAMMAR method implementation MUST do a fetch of all external URIs that are part of that operation. If caching is implemented, this URI fetching MUST conform to the cache control hints and parameter header fields associated with the method in deciding whether the URIs should be fetched from cache or from the external server. If these hints/parameters are not specified in the method, the values set for the session using SET-PARAMS/GET-PARAMS apply. If they were not set for the session, their default values apply.
If the server resource is in the recognition state, the server MUST respond to the DEFINE-GRAMMAR request with a failure status.
If the resource is in the idle state and is able to successfully process the supplied grammars, the server MUST return a success status-code, and the request-state MUST be COMPLETE.
If the recognizer resource could not define the grammar for some reason (for example, if the download failed, the grammar failed to compile, or the grammar was in an unsupported form), the MRCPv2 response for the DEFINE-GRAMMAR method MUST contain a failure status-code of 407 and a Completion-Cause header field describing the failure reason.
For example, a DEFINE-GRAMMAR request might carry grammar content containing rules such as:

   <!-- single language attachment to tokens -->
   <rule id="yes">
     <one-of>
       <item xml:lang="fr-CA">oui</item>
       <item xml:lang="en-US">yes</item>
     </one-of>
   </rule>

   <!-- single language attachment to a rule expansion -->
   <rule id="request">
     may I speak to
     <one-of xml:lang="fr-CA">
       <item>Michel Tremblay</item>
       <item>Andre Roy</item>
     </one-of>
   </rule>

A RECOGNIZE request that matches against this grammar could then return a result such as:

   <?xml version="1.0"?>
   <result xmlns="urn:ietf:params:xml:ns:mrcpv2"
           xmlns:ex="http://www.example.com/example"
           grammar="session:request1@form-level.store">
     <interpretation>
       <instance name="Person">
         <ex:Person>
           <ex:Name> Andre Roy </ex:Name>
         </ex:Person>
       </instance>
       <input> may I speak to Andre Roy </input>
     </interpretation>
   </result>
The RECOGNIZE method from the client to the server requests the recognizer to start recognition and provides it with one or more grammar references for grammars to match against the input media. The RECOGNIZE method can carry header fields to control the sensitivity, confidence level, and the level of detail in results provided by the recognizer. These header field values override the current values set by a previous SET-PARAMS method.
The RECOGNIZE method can request the recognizer resource to operate in normal or hotword mode, as specified by the Recognition-Mode header field. The default value is "normal". If the resource could not start a recognition, the server MUST respond with a failure status-code of 407 and a Completion-Cause header field in the response describing the cause of failure.
The RECOGNIZE request uses the message body to specify the grammars applicable to the request. The active grammar(s) for the request can be specified in one of three ways. If the client needs to explicitly control grammar weights for the recognition operation, it MUST employ method 3 below. The order of these grammars specifies the precedence of the grammars that is used when more than one grammar in the list matches the speech; in this case, the grammar with the higher precedence is returned as a match. This precedence capability is useful in applications like VoiceXML browsers to order grammars specified at the dialog, document, and root level of a VoiceXML application.
1. The grammar MAY be placed directly in the message body as typed content. If more than one grammar is included in the body, the order of inclusion controls the corresponding precedence for the grammars during recognition, with earlier grammars in the body having a higher precedence than later ones.
2. The body MAY contain a list of grammar URIs specified in content of media type 'text/uri-list' [RFC2483]. The order of the URIs determines the corresponding precedence for the grammars during recognition, with highest precedence first and decreasing for each URI thereafter.